NASA Astrophysics Data System (ADS)
Asinari, Pietro
2010-10-01
Distribution format: tar.gz Programming language: Tested with Matlab version ⩽6.5. However, in principle, any recent version of Matlab or Octave should work. Computer: All supporting Matlab or Octave. Operating system: All supporting Matlab or Octave. RAM: 300 MBytes. Classification: 23. Nature of problem: The problem consists of integrating the homogeneous Boltzmann equation for a generic collisional kernel in the case of isotropic symmetry, by a deterministic direct method. Difficulties arise from the multi-dimensionality of the collisional operator and from satisfying the conservation of particle number and energy (momentum is trivial for this test case) as accurately as possible, in order to preserve the late dynamics. Solution method: The solution is based on the method proposed by Aristov (2001) [1], but with two substantial improvements: (a) the original problem is reformulated in terms of particle kinetic energy (this allows one to ensure exact particle number and energy conservation during microscopic collisions) and (b) a DVM-like correction (where DVM stands for Discrete Velocity Model) is adopted for improving the relaxation rates (this allows one to satisfy the conservation laws exactly at the macroscopic level, which is particularly important for describing the late dynamics in the relaxation towards equilibrium). Both corrections make it possible to derive very accurate reference solutions for this test case. Restrictions: The nonlinear Boltzmann equation is extremely challenging from the computational point of view, in particular for deterministic methods, despite the increased computational power of recent hardware. In this work, only the homogeneous isotropic case is considered, to make possible the development of a minimal program (in a simple scripting language) and to allow the user to check the advantages of the proposed improvements over Aristov's (2001) method [1].
The initial conditions are assumed to be parameterized according to a fixed analytical expression, but this can be
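The energy-space reformulation above enforces the moments exactly at the discrete level. A minimal sketch of this kind of moment-restoring correction (our illustration, not the paper's exact scheme, written in Python rather than MATLAB): after a collision step, the distribution f(E) on an energy grid is rescaled by a linear factor a + bE chosen so that particle number and energy match their target values.

```python
import numpy as np

# Hypothetical moment-restoring correction: rescale f(E) by (a + b*E) so that
# sum(f) and sum(f*E) equal prescribed targets N0 and E0 exactly.
E = np.linspace(0.0, 10.0, 64)                    # kinetic-energy grid
f = np.exp(-E) * (1 + 0.05 * np.sin(3 * E))       # distribution after a collision step

N0, E0 = 1.0, 1.2                                 # target number and energy densities
# Moment constraints give a 2x2 linear system for (a, b):
#   a*sum(f)   + b*sum(f*E)   = N0
#   a*sum(f*E) + b*sum(f*E^2) = E0
m0, m1, m2 = f.sum(), (f * E).sum(), (f * E**2).sum()
a, b = np.linalg.solve([[m0, m1], [m1, m2]], [N0, E0])
f_corr = f * (a + b * E)
```

The 2x2 moment matrix is a Gram matrix of (1, E) weighted by f > 0, so it is always invertible and the corrected moments are exact to machine precision.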
Deterministic modelling and stochastic simulation of biochemical pathways using MATLAB.
Ullah, M; Schmidt, H; Cho, K H; Wolkenhauer, O
2006-03-01
The analysis of complex biochemical networks is conducted in two popular conceptual frameworks for modelling. The deterministic approach requires the solution of ordinary differential equations (ODEs, reaction rate equations) with concentrations as continuous state variables. The stochastic approach involves the simulation of differential-difference equations (chemical master equations, CMEs) with probabilities as variables, generating counts of molecules for chemical species as realisations of random variables drawn from the probability distribution described by the CMEs. Although there are numerous tools available, many of them free, the modelling and simulation environment MATLAB is widely used in the physical and engineering sciences. We describe a collection of MATLAB functions to construct and solve ODEs for deterministic simulation and to implement realisations of CMEs for stochastic simulation using advanced MATLAB coding (Release 14). The program was successfully applied to pathway models from the literature for both cases. The results were compared to implementations using alternative tools for dynamic modelling and simulation of biochemical networks. The aim is to provide a concise set of MATLAB functions that encourages experimentation with systems biology models. All the script files are available from www.sbi.uni-rostock.de/publications_matlab-paper.html. PMID:16986253
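The standard way to draw realisations from a CME is Gillespie's stochastic simulation algorithm. A minimal sketch in Python (the paper's code is MATLAB; the reaction and rate constant here are illustrative) for the irreversible isomerisation A → B:

```python
import numpy as np

# Gillespie SSA for A -> B with rate constant c: exponential waiting times
# with the total propensity, then fire the (only) reaction channel.
rng = np.random.default_rng(0)
c, nA, nB, t = 0.5, 100, 0, 0.0
times, counts = [t], [nA]
while nA > 0:
    a0 = c * nA                      # total propensity
    t += rng.exponential(1.0 / a0)   # time to next reaction event
    nA, nB = nA - 1, nB + 1          # update molecule counts
    times.append(t)
    counts.append(nA)
```

Each trajectory is one realisation of the random variables described by the CME; averaging many such trajectories approaches the deterministic ODE solution for large molecule counts.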
MatLab Script and Functional Programming
NASA Technical Reports Server (NTRS)
Shaykhian, Gholam Ali
2007-01-01
MatLab Script and Functional Programming: MatLab is one of the most widely used very high-level programming languages for scientific and engineering computations. It is very user-friendly and requires practically no formal programming knowledge. Presented here are MatLab programming aspects, not just MatLab commands, for scientists and engineers who have no formal programming training and no significant time to spare for learning programming to solve their real-world problems. Specifically provided are programs for visualization. The MatLab seminar covers the functional and script programming aspects of the MatLab language. Specific expectations are: a) Recognize MatLab commands, scripts and functions. b) Create and run a MatLab function. c) Read, recognize, and describe MatLab syntax. d) Recognize decisions, loops and matrix operators. e) Evaluate scope among multiple files, and multiple functions within a file. f) Declare, define and use scalar variables, vectors and matrices.
Ada programming guidelines for deterministic storage management
NASA Technical Reports Server (NTRS)
Auty, David
1988-01-01
Previous reports have established that a program can be written in the Ada language such that the program's storage management requirements are determinable prior to its execution. Specific guidelines for ensuring such deterministic usage of Ada dynamic storage requirements are described. Because requirements may vary from one application to another, guidelines are presented in a most-restrictive to least-restrictive fashion to allow the reader to match appropriate restrictions to the particular application area under investigation.
MatLab Programming for Engineers Having No Formal Programming Knowledge
NASA Technical Reports Server (NTRS)
Shaykhian, Linda H.; Shaykhian, Gholam Ali
2007-01-01
MatLab is one of the most widely used very high-level programming languages for scientific and engineering computations. It is very user-friendly and requires practically no formal programming knowledge. Presented here are MatLab programming aspects, not just MatLab commands, for scientists and engineers who have no formal programming training and no significant time to spare for learning programming to solve their real-world problems. Specifically provided are programs for visualization. Also stated are the current limitations of MatLab, which could be addressed by MathWorks Inc. in a future version to make MatLab more versatile.
QUBIT4MATLAB V3.0: A program package for quantum information science and quantum optics for MATLAB
NASA Astrophysics Data System (ADS)
Tóth, Géza
2008-09-01
A program package for MATLAB is introduced that helps calculations in quantum information science and quantum optics. It has commands for the following operations: (i) Reordering the qudits of a quantum register, computing the reduced state of a quantum register. (ii) Defining important quantum states easily. (iii) Formatted input and output for quantum states and operators. (iv) Constructing operators acting on given qudits of a quantum register and constructing spin chain Hamiltonians. (v) Partial transposition, matrix realignment and other operations related to the detection of quantum entanglement. (vi) Generating random state vectors, random density matrices and random unitaries. Program summary Program title: QUBIT4MATLAB V3.0 Catalogue identifier: AEAZ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAZ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 5683 No. of bytes in distributed program, including test data, etc.: 37 061 Distribution format: tar.gz Programming language: MATLAB 6.5; runs also on Octave Computer: Any which supports MATLAB 6.5 Operating system: Any which supports MATLAB 6.5; e.g., Microsoft Windows XP, Linux Classification: 4.15 Nature of problem: Subroutines helping calculations in quantum information science and quantum optics. Solution method: A program package, that is, a set of commands, is provided for MATLAB. One can use these commands interactively or within a program. Running time: 10 seconds-1 minute
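Operation (v), partial transposition, underlies the PPT entanglement test: a separable state stays positive under partial transposition, so a negative eigenvalue certifies entanglement. A plain-numpy illustration (not the QUBIT4MATLAB code itself) for the two-qubit singlet state:

```python
import numpy as np

# PPT test for the singlet (|01> - |10>)/sqrt(2): its partial transpose
# has eigenvalue -1/2, so the state is detected as entangled.
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)   # singlet amplitudes
rho = np.outer(psi, psi.conj())                       # density matrix

# Partial transpose on the second qubit: with indices rho[(i,j),(k,l)],
# swap j and l. Reshape to (2,2,2,2), permute, reshape back.
rho_pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
min_eig = np.linalg.eigvalsh(rho_pt).min()            # negative => entangled
```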
NASA Astrophysics Data System (ADS)
Lachhwani, Kailash; Nehra, Suresh
2015-09-01
In this paper, we present a modified fuzzy goal programming (FGP) approach and a generalized MATLAB program for solving multi-level linear fractional programming problems (ML-LFPPs), based on earlier FGP algorithms with some major modifications. In the proposed modified FGP approach, solution preferences of the decision makers at each level are not considered, and the fuzzy goal for the decision vectors is defined using individual best solutions. The proposed modified algorithm, as well as the MATLAB program, simplifies the earlier algorithm on ML-LFPPs by eliminating solution preferences of the decision makers at each level, thereby avoiding difficulties associated with multi-level programming problems and decision deadlock situations. The proposed modified technique is simple, efficient and requires less computational effort in comparison with earlier FGP techniques. The proposed generalized MATLAB program based on this modified approach is also a unique programming tool for dealing with such complex mathematical problems in MATLAB. With this software-based program, the user can directly obtain a compromise optimal solution of ML-LFPPs. The aim of this paper is to present the modified FGP technique and the generalized MATLAB program to obtain compromise optimal solutions of ML-LFP problems in a simple and efficient manner. A comparative analysis is also carried out with a numerical example in order to show the efficiency of the proposed modified approach and to demonstrate the functionality of the MATLAB program.
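The core FGP construction referred to above is a fuzzy membership function for each objective, built from its individual best and worst values. A minimal sketch (our illustration with made-up values, not the authors' program): the linear membership is 1 at the best value, 0 at the worst, and clipped in between.

```python
import numpy as np

def membership(Z, Z_best, Z_worst):
    """Linear fuzzy membership: 1 at Z_best, 0 at Z_worst, clipped to [0, 1]."""
    return np.clip((Z - Z_worst) / (Z_best - Z_worst), 0.0, 1.0)

# Illustrative objective values with best 10 and worst 4 (a maximization goal).
mu = membership(np.array([2.0, 5.0, 8.0, 11.0]), Z_best=10.0, Z_worst=4.0)
```

The compromise solution is then found by maximizing the (weighted) satisfaction of all such memberships subject to the level constraints.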
MatLab program for precision calibration of optical tweezers
NASA Astrophysics Data System (ADS)
Tolić-Nørrelykke, Iva Marija; Berg-Sørensen, Kirstine; Flyvbjerg, Henrik
2004-06-01
Optical tweezers are used as force transducers in many types of experiments. The force they exert in a given experiment is known only after a calibration. Computer codes that calibrate optical tweezers with high precision and reliability in the (x, y)-plane orthogonal to the laser beam axis were written in MatLab (MathWorks Inc.) and are presented here. The calibration is based on the power spectrum of the Brownian motion of a dielectric bead trapped in the tweezers. Precision is achieved by accounting for a number of factors that affect this power spectrum. First, cross-talk between channels in 2D position measurements is tested for, and eliminated if detected. Then the Lorentzian power spectrum that results from the Einstein-Ornstein-Uhlenbeck theory is fitted to the low-frequency part of the experimental spectrum in order to obtain an initial guess for the parameters to be fitted. Finally, a more complete theory is fitted, one that optionally accounts for the frequency dependence of the hydrodynamic drag force and hydrodynamic interaction with a nearby cover slip, for effects of finite sampling frequency (aliasing), for effects of anti-aliasing filters in the data acquisition electronics, and for unintended "virtual" filtering caused by the position detection system. Each of these effects can be left out or included as the user prefers, with user-defined parameters. Several tests are applied to the experimental data during calibration to ensure that the data comply with the theory used for their interpretation: independence of x- and y-coordinates, Hooke's law, exponential distribution of power spectral values, uncorrelated Gaussian scatter of residual values. Results are given with statistical errors and the covariance matrix. Program summary Title of program: tweezercalib Catalogue identifier: ADTV Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Program Summary URL: http://cpc.cs.qub.ac.uk/summaries/ADTV Computer for
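The initial-guess step described above can be sketched very compactly: for the Lorentzian P(f) = D / (2π²(fc² + f²)), the reciprocal 1/P is linear in f², so a straight-line fit recovers the corner frequency fc and diffusion coefficient D. A Python sketch with noise-free synthetic data (the actual program is MATLAB and fits noisy spectra):

```python
import numpy as np

# Lorentzian power spectrum: P(f) = D / (2*pi^2 * (fc^2 + f^2)).
# Then 1/P = (2*pi^2/D)*fc^2 + (2*pi^2/D)*f^2, linear in f^2.
D_true, fc_true = 0.45, 530.0
f = np.linspace(50.0, 5000.0, 400)
P = D_true / (2 * np.pi**2 * (fc_true**2 + f**2))

b, a = np.polyfit(f**2, 1.0 / P, 1)    # 1/P = a + b*f^2 (slope b, intercept a)
fc_est = np.sqrt(a / b)                # corner frequency
D_est = 2 * np.pi**2 / b               # diffusion coefficient
```

With real data one would weight the fit and restrict it to the low-frequency part, as the abstract notes, before refining with the fuller hydrodynamic theory.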
MPBEC, a Matlab Program for Biomolecular Electrostatic Calculations
Vergara-Perez, Sandra; Marucho, Marcelo
2015-01-01
One of the most used and efficient approaches to computing electrostatic properties of biological systems is to numerically solve the Poisson-Boltzmann (PB) equation. Several software packages are available that solve the PB equation for molecules in aqueous electrolyte solutions. Most of these packages are aimed at scientists with specialized training and expertise in computational biophysics. However, the user is usually required to make several important choices manually, depending on the complexity of the biological system, to successfully obtain the numerical solution of the PB equation. This may become an obstacle for researchers, experimentalists, and even students with no special training in computational methodologies. Aiming to overcome this limitation, in this article we present MPBEC, a free, cross-platform, open-source software that provides non-experts in the field an easy and efficient way to perform biomolecular electrostatic calculations on single-processor computers. MPBEC is a Matlab script based on the Adaptive Poisson-Boltzmann Solver, one of the most popular approaches used to solve the PB equation. MPBEC does not require any user programming, text editing or extensive statistical skills, and comes with detailed user-guide documentation. As a unique feature, MPBEC includes a graphical user interface (GUI) application which helps and guides users to configure and set up the optimal parameters and approximations to successfully perform the required biomolecular electrostatic calculations. The GUI also incorporates visualization tools to facilitate users' pre- and post-analysis of structural and electrical properties of biomolecules. PMID:26924848
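To give a feel for the kind of equation involved, here is a toy 1-D linearized PB (Debye-Hückel) solve by finite differences, far simpler than what MPBEC/APBS do for real biomolecules: φ'' = κ²φ with φ(0) = 1, φ(L) = 0, whose solution decays roughly like exp(-κx) for κL large. All values are illustrative.

```python
import numpy as np

# 1-D linearized Poisson-Boltzmann: phi'' = kappa^2 * phi, Dirichlet BCs.
kappa, L, n = 2.0, 5.0, 501
x = np.linspace(0.0, L, n)
h = x[1] - x[0]

# Tridiagonal finite-difference system for interior nodes:
# (phi[i-1] - 2*phi[i] + phi[i+1])/h^2 = kappa^2 * phi[i]
main = -(2.0 / h**2 + kappa**2) * np.ones(n - 2)
off = np.ones(n - 3) / h**2
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
rhs = np.zeros(n - 2)
rhs[0] = -1.0 / h**2                     # known boundary value phi(0) = 1
phi = np.concatenate(([1.0], np.linalg.solve(A, rhs), [0.0]))
```

For κL = 10 the solution at x = 1 is very close to exp(-κ·1), the screened-potential decay that motivates the PB treatment of electrolytes.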
MPBEC, a Matlab Program for Biomolecular Electrostatic Calculations
NASA Astrophysics Data System (ADS)
Vergara-Perez, Sandra; Marucho, Marcelo
2016-01-01
One of the most used and efficient approaches to computing electrostatic properties of biological systems is to numerically solve the Poisson-Boltzmann (PB) equation. Several software packages are available that solve the PB equation for molecules in aqueous electrolyte solutions. Most of these packages are aimed at scientists with specialized training and expertise in computational biophysics. However, the user is usually required to make several important choices manually, depending on the complexity of the biological system, to successfully obtain the numerical solution of the PB equation. This may become an obstacle for researchers, experimentalists, and even students with no special training in computational methodologies. Aiming to overcome this limitation, in this article we present MPBEC, a free, cross-platform, open-source software that provides non-experts in the field an easy and efficient way to perform biomolecular electrostatic calculations on single-processor computers. MPBEC is a Matlab script based on the Adaptive Poisson-Boltzmann Solver, one of the most popular approaches used to solve the PB equation. MPBEC does not require any user programming, text editing or extensive statistical skills, and comes with detailed user-guide documentation. As a unique feature, MPBEC includes a graphical user interface (GUI) application which helps and guides users to configure and set up the optimal parameters and approximations to successfully perform the required biomolecular electrostatic calculations. The GUI also incorporates visualization tools to facilitate users' pre- and post-analysis of structural and electrical properties of biomolecules.
A Matlab Program for Textural Classification Using Neural Networks
NASA Astrophysics Data System (ADS)
Leite, E. P.; de Souza, C.
2008-12-01
A new MATLAB code that provides tools to perform classification of textural images for applications in the Geosciences is presented. The program, here coined TEXTNN, comprises the computation of variogram maps in the frequency domain for specific lag distances in the neighborhood of a pixel. The result is then converted back to the spatial domain, where directional or omnidirectional semivariograms are extracted. Feature vectors are built with textural information composed of the semivariance values at these lag distances and, moreover, with histogram measures of mean, standard deviation and weighted fill-ratio. This procedure is applied to a selected group of pixels or to all pixels in an image using a moving window. A feed-forward back-propagation Neural Network can then be designed and trained on feature vectors of predefined classes (training set). The training phase minimizes the mean-squared error on the training set. Additionally, at each iteration, the mean-squared error on a validation set is assessed and a test set is evaluated. The program also calculates contingency matrices, global accuracy and the kappa coefficient for the three data sets, allowing a quantitative appraisal of the predictive power of the Neural Network models. The interpreter is able to select the best model obtained from a k-fold cross-validation or to use a unique split-sample data set for classification of all pixels in a given textural image. The code is open to the geoscientific community and is very flexible, allowing the experienced user to modify it as necessary. The performance of the algorithms and the end-user program was tested using synthetic images, orbital SAR (RADARSAT) imagery for oil seepage detection, and airborne, multi-polarimetric SAR imagery for geologic mapping. The overall results proved very promising.
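The textural feature at the heart of the method is the semivariogram, γ(h) = mean((z(x) - z(x+h))²)/2, evaluated at a few lag distances. A minimal Python sketch (a directional version along image rows, on a made-up patch; TEXTNN itself works via variogram maps in the frequency domain):

```python
import numpy as np

# Semivariance at integer lag along the row direction of an image patch.
def semivariogram(img, lag):
    d = img[:, lag:] - img[:, :-lag]
    return 0.5 * np.mean(d**2)

# A checkerboard-like patch: strong variation at lag 1, none at lag 2,
# which is exactly the periodic texture signature the features capture.
patch = np.array([[1, 2, 1, 2],
                  [2, 1, 2, 1],
                  [1, 2, 1, 2],
                  [2, 1, 2, 1]], dtype=float)

gammas = [semivariogram(patch, h) for h in (1, 2)]
```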
Enhancing Student Writing and Computer Programming with LATEX and MATLAB in Multivariable Calculus
ERIC Educational Resources Information Center
Sullivan, Eric; Melvin, Timothy
2016-01-01
Written communication and computer programming are foundational components of an undergraduate degree in the mathematical sciences. All lower-division mathematics courses at our institution are paired with computer-based writing, coding, and problem-solving activities. In multivariable calculus we utilize MATLAB and LATEX to have students explore…
ERIC Educational Resources Information Center
Ocak, Mehmet A.
2006-01-01
This correlation study examined the relationship between gender and the students' attitude and prior knowledge of using one of the mathematical software programs (MATLAB). Participants were selected from one community college, one state university and one private college. Students were volunteers from three Calculus I classrooms (one class from…
Supporting image algebra in the Matlab programming language for compression research
NASA Astrophysics Data System (ADS)
Schmalz, Mark S.; Ritter, Gerhard X.; Wilson, Joseph N.; Hayden, Eric T.
2009-08-01
Image algebra is a rigorous, concise notation that unifies linear and nonlinear mathematics in the image domain. Image algebra was developed under DARPA and US Air Force sponsorship at the University of Florida over more than 15 years, beginning in 1984. It has been implemented in a variety of programming languages designed specifically to support the development of image processing and computer vision programs. The University of Florida has been associated with implementations supporting the languages FORTRAN, Ada, Lisp, and C++. The latter involved a class library, iac++, that supported image algebra programming in C++. Since image processing and computer vision are generally performed with array-based operands, the Matlab™ programming language is ideal for implementing the common subset of image algebra. Objects include sets and set operations, images and operations on images, as well as templates and image-template convolution operations. This implementation has been found useful for research in data, image, and video compression, as described herein. Due to the widespread acceptance of the Matlab programming language in the computing community, this new implementation offers exciting possibilities for supporting a large group of users. The control over an object's computational resources that Matlab provides to the algorithm designer means that the image algebra Matlab (IAM) library can employ versatile representations for the operands and operations of the algebra. In this paper, we first outline the purpose and structure of image algebra, then present IAM notation in relation to the preceding (iac++) implementation. We then provide examples to show how IAM is more convenient and more readily supports efficient algorithm development. Additionally, we show how image algebra and IAM can be employed in compression algorithm development and analysis.
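The unifying idea can be illustrated in a few lines: one generalized image-template product covers both linear convolution (sum of products) and grey-scale dilation (max of sums) just by swapping the two operations. This Python sketch uses our own function names, not the IAM library's:

```python
import numpy as np

# Generalized image-template product: 'combine' pairs window and template
# values, 'reduce_' collapses the window to one output pixel.
def template_product(img, tmpl, combine, reduce_):
    out = np.empty_like(img, dtype=float)
    k = tmpl.shape[0] // 2
    padded = np.pad(img, k, mode="edge")
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = padded[i:i + tmpl.shape[0], j:j + tmpl.shape[1]]
            out[i, j] = reduce_(combine(win, tmpl))
    return out

img = np.array([[0., 0., 0.], [0., 1., 0.], [0., 0., 0.]])  # unit spike
box = np.ones((3, 3))

conv = template_product(img, box, np.multiply, np.sum)   # linear (sum/product)
dila = template_product(img, box * 0, np.add, np.max)    # morphological (max/plus)
```

Both results spread the central spike over the full 3x3 image, but through algebraically different operations; in image algebra both are instances of the same product over different semirings.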
NASA Astrophysics Data System (ADS)
Konnik, Mikhail V.; Welsh, James
2012-09-01
Numerical simulators for adaptive optics systems have become an essential tool for the research and development of future advanced astronomical instruments. However, the growing software code of a numerical simulator makes it difficult to continue to support the code itself. The problem of adequately documenting astronomical software for adaptive optics simulators may complicate development, since the documentation must contain up-to-date schemes and mathematical descriptions of what is implemented in the software code. Although most modern programming environments like MATLAB or Octave have built-in documentation abilities, these are often insufficient for describing a typical adaptive optics simulator code. This paper describes a general cross-platform framework for the documentation of scientific software using open-source tools such as LaTeX, Mercurial, Doxygen, and Perl. Using a Perl script that translates MATLAB M-file comments into C-like ones, one can use Doxygen to generate and update the documentation for the scientific source code. The documentation generated by this framework contains the current code description with mathematical formulas, images, and bibliographical references. A detailed description of the framework components is presented, as well as guidelines for the framework deployment. Examples of code documentation for the scripts and functions of a MATLAB-based adaptive optics simulator are provided.
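The translation step the framework relies on is conceptually simple: MATLAB `%` comment lines are rewritten as C-style `//` lines so that Doxygen can parse them. A Python sketch of that idea (ours; the paper uses a Perl script, and a real translator would also handle Doxygen markers and block comments):

```python
# Rewrite leading '%' MATLAB comments as '//' C-style comments,
# preserving indentation and leaving code lines untouched.
def matlab_comments_to_c(src: str) -> str:
    out = []
    for line in src.splitlines():
        stripped = line.lstrip()
        if stripped.startswith("%"):
            indent = line[: len(line) - len(stripped)]
            out.append(indent + "//" + stripped[1:])
        else:
            out.append(line)
    return "\n".join(out)

demo = matlab_comments_to_c("% computes y\ny = x.^2;\n  % done")
```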
Design of a program in Matlab environment for gamma spectrum analysis of geological samples
NASA Astrophysics Data System (ADS)
Rojas, M.; Correa, R.
2016-05-01
In this work we present the analysis of gamma-ray spectra of Ammonite fossils found in different places. One of the fossils was found near the city of Cusco (Perú) and the other in “Cajón del Maipo” in Santiago (Chile). Spectra were taken with a hyperpure germanium detector (HPGe) cooled with liquid nitrogen, using the technique of high-resolution gamma spectroscopy. A program for automatic detection and classification of the samples was developed in Matlab. The program has the advantage that it can be modified directly, generalized further, or automated for specific spectra, and it can make comparisons between them. For example, it can calibrate a spectrum automatically, given only the calibration spectrum, without the need to enter the calibration points manually. Finally, it also removes external background noise.
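An automatic energy calibration of the kind described can be sketched as: locate peak channels in a calibration spectrum, then fit a linear channel-to-energy relation. This Python illustration uses a synthetic spectrum and assumes two Na-22 lines (511 and 1274.5 keV) as the calibration source; the detector, line choice, and peak widths are all made up for the example.

```python
import numpy as np

# Synthetic calibration spectrum: two Gaussian peaks on a 4096-channel axis.
ch = np.arange(4096, dtype=float)
spectrum = (np.exp(-0.5 * ((ch - 1022) / 6) ** 2)
            + np.exp(-0.5 * ((ch - 2549) / 6) ** 2))

# Peak channels: local maxima above a threshold.
peaks = [i for i in range(1, 4095)
         if spectrum[i] > spectrum[i - 1]
         and spectrum[i] >= spectrum[i + 1]
         and spectrum[i] > 0.5]

known_energies = [511.0, 1274.5]                 # keV, assumed Na-22 lines
slope, intercept = np.polyfit(peaks, known_energies, 1)  # E = slope*ch + b
```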
Aerial image simulation for partial coherent system with programming development in MATLAB
NASA Astrophysics Data System (ADS)
Hasan, Md. Nazmul; Rahman, Md. Momtazur; Udoy, Ariful Banna
2014-10-01
The aerial image for a partially coherent system can be calculated by either Abbe's method or the sum of coherent systems decomposition (SOCS) method. This paper introduces a Matlab program that recasts the analytical representation of Abbe's method in matrix form, which has advantages for both Abbe's method and SOCS, since matrix calculation is easier than double integration over the object plane or pupil plane. First, a matrix P is derived from the pupil function and the effective light source in the spatial frequency domain. By applying singular value decomposition (SVD) to the matrix P, eigenvalues and eigenfunctions are obtained. The aerial image can then be computed from the eigenvalues and eigenfunctions without calculating the transmission cross coefficient (TCC). The final aerial image is almost identical to the original cross mask, and the intensity distribution on the image plane is almost uniform across the linewidth of the mask.
How to get students to love (or not hate) MATLAB and programming
NASA Astrophysics Data System (ADS)
Reckinger, Shanon; Reckinger, Scott
2014-11-01
An effective programming course geared toward engineering students requires the utilization of modern teaching philosophies. A newly designed course that focuses on programming in MATLAB involves flipping the classroom and integrating various active teaching techniques. Vital aspects of the new course design include: lengthening in-class contact hours, Process-Oriented Guided Inquiry Learning (POGIL) worksheets (self-guided instruction), student-created video content posted on YouTube, clicker questions (used in class to practice reading and debugging code), programming exams that don't require computers, integrating oral exams into the classroom, fostering an environment for formal and informal peer learning, and designing in a broader theme to tie together assignments. However, possibly the most important piece of this programming-course puzzle is that the instructor needs to be able to find programming mistakes very fast and then lead individuals and groups through the steps to find their mistakes themselves. The effectiveness of the new course design is demonstrated through pre- and post-concept exam results and student evaluation feedback. Students reported that the course was challenging and required a lot of effort, but left largely positive feedback.
MDA: a MATLAB-based program for morphospace-disparity analysis
NASA Astrophysics Data System (ADS)
Navarro, Nicolas
2003-06-01
A MATLAB® program that examines patterns of state-space occupation is described. Four subroutines are available with which to visualize morphospace patterns: (i) in terms of their features, such as dispersion, aggregation and location, thereby allowing users to extract complementary quantitative information about how the state-space is structured, and (ii) in terms of changes in those patterns that can be compared with other biotic (e.g., extinction, origination rates) or abiotic (e.g., environmental proxy) information. The program incorporates many of the latest and most widely used statistical parameters for describing multivariate spaces. The parameters are estimated on the basis of bootstrap resampling or bootstrap rarefaction procedures. Applications based on stochastic simulation of the evolution of a monophyletic clade (using an m-file contained in the help folder of the MDA program) are presented so as to illustrate the program's various options. The versatility of MDA allows the most interesting patterns to be extracted rapidly from data, and the program can be applied readily to a wide range of state-space problems.
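The bootstrap estimation MDA performs can be sketched with one common disparity metric, the total variance (sum of per-axis variances) of a multivariate sample, resampled with replacement to attach a standard error. Metric, sample, and sizes below are illustrative, not MDA's defaults.

```python
import numpy as np

# Bootstrap estimate of a disparity metric (total variance) for a sample of
# 200 "taxa" in a 4-dimensional morphospace.
rng = np.random.default_rng(42)
sample = rng.standard_normal((200, 4))

def total_variance(x):
    return x.var(axis=0, ddof=1).sum()

# Resample rows with replacement and recompute the metric each time.
boots = np.array([total_variance(sample[rng.integers(0, 200, 200)])
                  for _ in range(500)])
estimate, stderr = boots.mean(), boots.std(ddof=1)
```

Rarefaction works the same way except that each resample is drawn at a smaller size, which lets disparity be compared across samples of unequal size.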
2-D Modeling of Energy-z Beam Dynamics Using the LiTrack Matlab Program
Cauley, S. K.; Woods, M. (SLAC)
2005-12-15
Short bunches and the bunch length distribution have important consequences for both the LCLS project at SLAC and the proposed ILC project. For both of these projects, it is important to simulate what bunch length distributions are expected and then to perform actual measurements. The goal of the research is to determine the sensitivity of the bunch length distribution to accelerator phase and voltage; this then indicates the level of control and stability that is needed. In this project I simulated three different beamlines at SLAC to find the rms bunch length: the test beam to End Station A (ILC-ESA) for the ILC studies, the Linac Coherent Light Source (LCLS), and LCLS-ESA. To simulate the beamlines, I used the LiTrack program, which does a 2-dimensional tracking of an electron bunch's longitudinal position (z) and energy spread (E). In order to reduce the processing time, I developed a small program to loop over adjustable machine parameters. LiTrack is a Matlab script, and Matlab is also used for plotting and for saving and loading files. The results show that the LCLS in Linac-A is the most sensitive when looking at the ratio of the change in phase to the rate of change. The results also show a noticeable difference between the LCLS and LCLS-ESA, which suggests that further testing should examine the Beam Switch Yard and End Station A to determine why the results for the LCLS and LCLS-ESA vary.
NASA Astrophysics Data System (ADS)
Charsooghi, Mohammad A.; Akhlaghi, Ehsan A.; Tavaddod, Sharareh; Khalesifard, H. R.
2011-02-01
We developed a graphical-user-interface, MATLAB-based program to calculate the translational diffusion coefficients in three dimensions for a single diffusing particle suspended inside a fluid. When the particles are not spherical, a rotational degree of freedom is considered in addition to their translational motion, and a planar rotational diffusion coefficient can be calculated in addition to the translational diffusion coefficients. Time averaging and ensemble averaging over the particle displacements are used to calculate the mean square displacement as a function of time, and hence the diffusion coefficients. To monitor the random motion of non-spherical particles, a reference frame is used in which the particle has only translational motion. We call it the body frame; it rotates with the particle about the z-axis of the lab frame. Some statistical analyses, such as the velocity autocorrelation function and histograms of displacements for the particle in either the lab or body frame, are available in the program. The program also calculates theoretical values of the diffusion coefficients for particles of some basic geometrical shapes (sphere, spheroid and cylinder), where other diffusion parameters such as temperature and the fluid viscosity coefficient can be adjusted. Program summary Program title: KOJA Catalogue identifier: AEHK_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHK_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 48 021 No. of bytes in distributed program, including test data, etc.: 1 310 320 Distribution format: tar.gz Programming language: MatLab (MathWorks Inc.) version 7.6 or higher. Statistics Toolbox and Curve Fitting Toolbox required. Computer: Tested on Windows and Linux, but generally it would work on any
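The core estimate the program makes can be sketched in a few lines: for free 3-D diffusion, MSD(t) = 6Dt, so the diffusion coefficient is the slope of the ensemble-averaged mean square displacement divided by 6. A Python sketch with a simulated random walk (parameters are illustrative; the program works on measured trajectories):

```python
import numpy as np

# Simulate 2000 independent 3-D random walks with known D, then recover D
# from the slope of the ensemble-averaged MSD: MSD(t) = 6*D*t.
rng = np.random.default_rng(7)
D, dt, nsteps, nwalk = 0.25, 0.01, 400, 2000
steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(nwalk, nsteps, 3))
traj = steps.cumsum(axis=1)                   # positions relative to start

t = dt * np.arange(1, nsteps + 1)
msd = (traj**2).sum(axis=2).mean(axis=0)      # ensemble-averaged MSD
D_est = np.polyfit(t, msd, 1)[0] / 6.0        # slope / 6
```

For the rotational coefficient the same logic applies to the mean square angular displacement in the body frame, with the appropriate dimensional factor.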
NASA Astrophysics Data System (ADS)
Pašteka, R.; Karcol, R.; Kušnirák, D.; Mojzeš, A.
2012-12-01
Downward continuation of potential fields is a powerful but very unstable tool used in the processing and interpretation of geophysical data sets. The instability problem has been treated by various authors in different ways; the Tikhonov regularization approach is one of the most robust. It is based on deriving a low-pass filter in the Fourier spectral domain by solving a minimization problem. We highlight the most important characteristics of its theoretical background and present its realization in the form of a Matlab-based program. The optimum regularization parameter value is selected as a local minimum of constructed Lp-norm functions; in the majority of cases, the C-norms give the best results. We demonstrate the very good stabilizing properties of this method on several synthetic models and one real-world example from high-definition magnetometry. The main output of the proposed software solution is the estimation of the depth to source below the potential field measurement level.
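The stabilization idea can be shown in 1-D: downward continuation multiplies the spectrum by exp(+|k|h), which amplifies noise without bound, so the naive division by exp(-|k|h) is replaced by a damped quotient exp(-|k|h)/(exp(-2|k|h) + α). This is a generic Tikhonov-style form for illustration, not the paper's exact filter or parameter-selection rule.

```python
import numpy as np

# Upward-continue a field (stable), then recover it by regularized downward
# continuation in the wavenumber domain.
n, L, h, alpha = 256, 10.0, 0.3, 1e-6
x = np.linspace(0.0, L, n, endpoint=False)
field = np.cos(2 * np.pi * 3 * x / L) + 0.5 * np.sin(2 * np.pi * 5 * x / L)

k = np.abs(2 * np.pi * np.fft.fftfreq(n, d=L / n))
up = np.fft.ifft(np.fft.fft(field) * np.exp(-k * h)).real      # upward continuation
down = np.fft.ifft(np.fft.fft(up) * np.exp(-k * h) /
                   (np.exp(-2 * k * h) + alpha)).real          # regularized downward
rel_err = np.linalg.norm(down - field) / np.linalg.norm(field)
```

For small α the filter is close to exact inversion at low wavenumbers but rolls off where exp(-2|k|h) drops below α, which is precisely the low-pass behavior described above; choosing α is the minimization problem the paper addresses.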
TEXTNN—A MATLAB program for textural classification using neural networks
NASA Astrophysics Data System (ADS)
Leite, Emilson Pereira; de Souza Filho, Carlos Roberto
2009-10-01
A new MATLAB code that provides tools to perform classification of textural images for applications in the geosciences is presented in this paper. The program, here coined textural neural network (TEXTNN), comprises the computation of variogram maps in the frequency domain for specific lag distances in the neighborhood of a pixel. The result is then converted back to the spatial domain, where directional or omni-directional semivariograms are extracted. Feature vectors are built with textural information composed of semivariance values at these lag distances and, moreover, with the histogram measures of mean, standard deviation and weighted-rank fill ratio. This procedure is applied to a selected group of pixels or to all pixels in an image using a moving window. A feed-forward back-propagation neural network can then be designed and trained on feature vectors of predefined classes (training set). The training phase minimizes the mean-squared error on the training set. Additionally, at each iteration, the mean-squared error on a validation set is assessed, and a test set is evaluated. The program also calculates contingency matrices, global accuracy and the kappa coefficient for the training, validation and test sets, allowing a quantitative appraisal of the predictive power of the neural network models. The interpreter is able to select the best model obtained from a k-fold cross-validation or to use a unique split-sample dataset for classification of all pixels in a given textural image. The performance of the algorithms and the end-user program was tested using synthetic images, orbital synthetic aperture radar (SAR) (RADARSAT) imagery for oil-seepage detection, and airborne, multi-polarized SAR imagery for geologic mapping, and the overall results are considered quite positive.
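The semivariogram features at the heart of TEXTNN can be illustrated with a small Python sketch. The program computes variogram maps in the frequency domain for speed; this plain spatial-domain version, with function names of my own, only shows what the feature values mean:

```python
import numpy as np

def omnidirectional_semivariogram(window, lags):
    """Empirical omnidirectional semivariogram of a 2-D image window.

    gamma(h) = 0.5 * mean, over all pixel pairs separated by distance h,
    of the squared difference of pixel values.
    """
    window = np.asarray(window, dtype=float)
    rows, cols = np.indices(window.shape)
    coords = np.column_stack([rows.ravel(), cols.ravel()])
    vals = window.ravel()
    # pairwise distances and squared value differences
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    sq = (vals[:, None] - vals[None, :]) ** 2
    gamma = []
    for h in lags:
        mask = np.isclose(d, h)
        gamma.append(0.5 * sq[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)
```

Evaluated in a moving window, such semivariance values at a few lag distances (plus histogram measures) form the feature vector fed to the neural network.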
NASA Astrophysics Data System (ADS)
Raju, Lakshmi
2014-03-01
The objective of this project was to develop a low-cost infrared spectrophotometer to measure terrestrial or extraterrestrial water vapor and to create a Matlab program to analyze the absorption data. Narrow-bandwidth infrared filters of 940 nm and 1000 nm were used to differentially detect absorption due to the vibrational frequency of water vapor. Light travelling through a collimating tube with varying humidity was allowed to pass through the respective filters. The intensity of the exiting light was measured using a silicon photodiode connected to a multimeter and a laptop running a Matlab program. Absorption measured (as a decrease in voltage) using the 940 nm filter was significantly higher with increasing humidity (p less than 0.05), demonstrating that the instrument can detect and relatively quantify water vapor. A Matlab program was written to comparatively graph the absorption data. In conclusion, a novel, low-cost infrared spectrophotometer was successfully created to detect water vapor and serves as a prototype for detecting water on the moon. This instrument can also assist in teaching and learning spectrophotometry.
Yang, X.
1998-12-31
Modeling ground motions from multi-shot, delay-fired mining blasts is important to the understanding of their source characteristics such as spectrum modulation. MineSeis is a MATLAB® (a computer language) Graphical User Interface (GUI) program developed for the effective modeling of these multi-shot mining explosions. The program provides a convenient and interactive tool for modeling studies. Multi-shot, delay-fired mining blasts are modeled as the time-delayed linear superposition of identical single-shot sources in the program. These single shots are in turn modeled as the combination of an isotropic explosion source and a spall source. Mueller and Murphy's (1971) model for underground nuclear explosions is used as the explosion source model. A modification of Anandakrishnan et al.'s (1997) spall model is developed as the spall source model. Delays both due to the delay-firing and due to the single-shot location differences are taken into account in calculating the time delays of the superposition. Both synthetic and observed single-shot seismograms can be used to construct the superpositions. The program uses the MATLAB GUI for input and output to facilitate user interaction with the program. With user-provided source and path parameters, the program calculates and displays the source time functions, the single-shot synthetic seismograms and the superimposed synthetic seismograms. In addition, the program provides tools so that the user can manipulate the results, such as filtering, zooming and creating hard copies.
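The central modeling step, the time-delayed linear superposition of identical single-shot seismograms, can be sketched in a few lines of Python (an illustration, not MineSeis code; the names and the use of integer-sample delays are assumptions):

```python
import numpy as np

def superpose_shots(single, delays_samples, n_out):
    """Time-delayed linear superposition of identical single-shot seismograms.

    single         : 1-D single-shot seismogram
    delays_samples : integer delays in samples (firing-time delays plus
                     location-dependent travel-time delays, pre-combined)
    n_out          : length of the output trace
    """
    out = np.zeros(n_out)
    for d in delays_samples:
        n = min(len(single), n_out - d)   # clip shots that run past the end
        if n > 0:
            out[d:d + n] += single[:n]
    return out
```

The periodic spacing of the delays is what produces the characteristic spectrum modulation of delay-fired blasts mentioned above.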
MATLAB-Based Program for Teaching Autocorrelation Function and Noise Concepts
ERIC Educational Resources Information Center
Jovanovic Dolecek, G.
2012-01-01
An attractive MATLAB-based tool for teaching the basics of autocorrelation function and noise concepts is presented in this paper. This tool enhances traditional in-classroom lecturing. The demonstrations of the tool described here highlight the description of the autocorrelation function (ACF) in a general case for wide-sense stationary (WSS)…
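The quantity the tool demonstrates, the sample autocorrelation function of a noisy sequence, can be illustrated with a short Python sketch (the tool itself is MATLAB; the normalized biased estimator below is the standard textbook form):

```python
import numpy as np

def acf(x, max_lag):
    """Biased sample autocorrelation of a sequence (mean removed),
    normalized so that acf[0] == 1."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    c = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag + 1)])
    return c / c[0]
```

For white noise the ACF drops to roughly zero for all nonzero lags, while correlated (e.g. filtered) noise shows a slower decay; contrasting the two is a typical classroom demonstration.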
Sobie, Eric A
2011-09-20
This two-part lecture introduces students to the scientific computing language MATLAB. Prior computer programming experience is not required. The lectures present basic concepts of computer programming logic that tend to cause difficulties for beginners in addition to concepts that relate specifically to the MATLAB language syntax. The lectures begin with a discussion of vectors, matrices, and arrays. Because many types of biological data, such as fluorescence images and DNA microarrays, are stored as two-dimensional objects, processing these data is a form of array manipulation, and MATLAB is especially adept at handling such array objects. The students are introduced to basic commands in MATLAB, as well as built-in functions that provide useful shortcuts. The second lecture focuses on the differences between MATLAB scripts and MATLAB functions and describes when one method of programming organization might be preferable to the other. The principles are illustrated through the analysis of experimental data, specifically measurements of intracellular calcium concentration in live cells obtained using confocal microscopy. PMID:21934110
Calculus Demonstrations Using MATLAB
ERIC Educational Resources Information Center
Dunn, Peter K.; Harman, Chris
2002-01-01
The note discusses ways in which technology can be used in the calculus learning process. In particular, five MATLAB programs are detailed for use by instructors or students that demonstrate important concepts in introductory calculus: Newton's method, differentiation and integration. Two of the programs are animated. The programs and the…
NASA Astrophysics Data System (ADS)
Dattani, Nikesh S.
2013-12-01
This MATLAB program calculates the dynamics of the reduced density matrix of an open quantum system modeled either by the Feynman-Vernon model or the Caldeira-Leggett model. The user gives the program a Hamiltonian matrix that describes the open quantum system as if it were in isolation, a matrix of the same size that describes how that system couples to its environment, and a spectral distribution function and temperature describing the environment’s influence on it, in addition to the open quantum system’s initial density matrix and a grid of times. With this, the program returns the reduced density matrix of the open quantum system at all moments specified by that grid of times (or just the last moment specified by the grid of times if the user makes this choice). This overall calculation can be divided into two stages: the setup of the Feynman integral, and the actual calculation of the Feynman integral for time propagation of the density matrix. When this program calculates this propagation on a multi-core CPU, it is this propagation that is usually the rate-limiting step of the calculation, but when it is calculated on a GPU, the propagation is calculated so quickly that the setup of the Feynman integral can actually become the rate-limiting step. The overhead of transferring information from the CPU to the GPU and back seems to have a negligible effect on the overall runtime of the program. When the required information cannot fit on the GPU, the user can choose to run the entire program on a CPU. Catalogue identifier: AEPX_v1_0. Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEPX_v1_0.html. Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 703. No. of bytes in distributed program, including test data, etc.: 11026. Distribution format: tar.gz. Programming
NASA Astrophysics Data System (ADS)
Monnet, Claude; Bouchet, Stéphane; Thiry-Bastien, Philippe
2003-11-01
The three-dimensional reconstruction of basin sediments has become a major topic in earth sciences and is now a necessary step for modeling and understanding the depositional context of sediments. Because data are generally scattered, the construction of any irregular, continuous surface involves the interpolation of a large number of points over a regular grid. However, interpolation is a highly technical specialty that is still somewhat of a black art for most people. The lack of multi-platform contouring software that is easy to use, fast and automatic, without numerous abstruse parameters, motivated the programming of a software package called ISOPAQ. This program is an interactive desktop tool for spatial analysis, interpolation and display (location, contour and surface mapping) of earth science data, especially stratigraphic data. It handles four-dimensional data sets, where the dimensions are usually longitude, latitude, thickness and time, stored in a single text file. The program uses functions written for the MATLAB® software. Data are managed by means of a user-friendly graphical interface, which allows the user to interpolate and generate maps for stratigraphic analyses. This program can process and compare several interpolation methods (nearest neighbor, linear and cubic triangulations, inverse distance and surface splines) and some stratigraphic treatments, such as the decompaction of sediments. Moreover, the window interface helps the user to easily change parameters such as coordinates, grid cell size, equidistance of contour lines and scale between files. Although primarily developed, thanks to the graphical user interface, for non-specialists in interpolation, practitioners can also easily extend the program with their own functions, since it is written in the open MATLAB language. As an example, the program is applied here to the Bajocian stratigraphic sequences of eastern France.
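Of the interpolation methods the program compares, inverse-distance weighting is the simplest to illustrate. A minimal Python sketch (the function name is mine, not ISOPAQ's):

```python
import numpy as np

def idw_grid(x, y, z, xi, yi, power=2.0):
    """Inverse-distance-weighted interpolation of scattered (x, y, z)
    data onto the regular grid defined by vectors xi and yi."""
    Xi, Yi = np.meshgrid(xi, yi)
    zi = np.empty_like(Xi, dtype=float)
    for i in range(Xi.shape[0]):
        for j in range(Xi.shape[1]):
            d = np.hypot(x - Xi[i, j], y - Yi[i, j])
            if np.any(d == 0):                 # grid node coincides with a datum
                zi[i, j] = z[np.argmin(d)]
            else:
                w = 1.0 / d**power
                zi[i, j] = np.sum(w * z) / np.sum(w)
    return zi
```

Each grid node gets a weighted mean of all data, with weights falling off as distance to the given power; comparing this against triangulation or spline surfaces on the same grid is exactly the kind of side-by-side study the program supports.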
A high throughput MATLAB program for automated force-curve processing using the AdG polymer model.
O'Connor, Samantha; Gaddis, Rebecca; Anderson, Evan; Camesano, Terri A; Burnham, Nancy A
2015-02-01
Research in understanding biofilm formation depends on accurate and representative measurements of the steric forces related to polymer brushes on bacterial surfaces. A MATLAB program to analyze force curves from an atomic force microscope (AFM) efficiently, accurately, and with minimal user bias has been developed. The analysis is based on a modified version of the Alexander and de Gennes (AdG) polymer model, which is a function of equilibrium polymer brush length, probe radius, temperature, separation distance, and a density variable. Automating the analysis reduces the amount of time required to process 100 force curves from several days to less than 2 min. The use of this program to crop and fit force curves to the AdG model will allow researchers to ensure proper processing of large amounts of experimental data and reduce the time required for analysis and comparison of data, thereby enabling higher quality results in a shorter period of time. PMID:25448021
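To illustrate the kind of fit the program automates, here is a Python sketch that fits a simplified exponential steric-brush model to force-distance data by log-linear least squares. The single-exponential form F(D) = F0·exp(-2πD/L0) is an assumption made here for illustration; the program described above uses a modified AdG model with additional parameters (probe radius, temperature, density):

```python
import numpy as np

def fit_adg(separation, force):
    """Fit the simplified steric-brush form F(D) = F0 * exp(-2*pi*D / L0)
    by linear least squares on log(F).

    Returns (F0, L0): prefactor and equilibrium brush length.
    """
    D = np.asarray(separation, dtype=float)
    F = np.asarray(force, dtype=float)
    slope, intercept = np.polyfit(D, np.log(F), 1)   # log F is linear in D
    L0 = -2.0 * np.pi / slope
    return np.exp(intercept), L0
```

Batch-applying such a fit to hundreds of cropped curves, with consistent cropping criteria, is what reduces days of manual work to minutes.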
NASA Astrophysics Data System (ADS)
Vasant, P.; Ganesan, T.; Elamvazuthi, I.
2012-11-01
Fairly reasonable results were obtained in the past for non-linear engineering problems using optimization techniques such as neural networks, genetic algorithms, and fuzzy logic independently. Increasingly, hybrid techniques are being used to solve non-linear problems to obtain better output. This paper discusses the use of a neuro-genetic hybrid technique to optimize geological structure mapping, known as seismic surveying. It involves the minimization of an objective function subject to geophysical and operational constraints. In this work, the optimization was initially performed using genetic programming, followed by a hybrid neuro-genetic programming approach. Comparative studies and analysis were then carried out on the optimized results. The results indicate that the hybrid neuro-genetic technique produced better results than the stand-alone genetic programming method.
ERIC Educational Resources Information Center
Karagiannis, P.; Markelis, I.; Paparrizos, K.; Samaras, N.; Sifaleras, A.
2006-01-01
This paper presents new web-based educational software (webNetPro) for "Linear Network Programming." It includes many algorithms for "Network Optimization" problems, such as shortest path problems, minimum spanning tree problems, maximum flow problems and other search algorithms. Therefore, webNetPro can assist the teaching process of courses such…
Nagy, Peter; Szabó, Ágnes; Váradi, Tímea; Kovács, Tamás; Batta, Gyula; Szöllősi, János
2016-04-01
Fluorescence or Förster resonance energy transfer (FRET) remains one of the most widely used methods for assessing protein clustering and conformation. Although it is a method with solid physical foundations, many applications of FRET fall short of providing quantitative results due to inappropriate calibration and controls. This shortcoming is especially valid for microscopy, where currently available tools have limited or no capability at all to display parameter distributions or to perform gating. Since users of multiparameter flow cytometry usually apply these tools, the absence of these features in applications developed for microscopic FRET analysis is a significant limitation. Therefore, we developed a graphical user interface-controlled Matlab application for the evaluation of ratiometric, intensity-based microscopic FRET measurements. The program can calculate all the necessary overspill and spectroscopic correction factors as well as the FRET efficiency, and it displays the results on histograms and dot plots. Gating on plots and mask images can be used to limit the calculation to certain parts of the image. It is an important feature of the program that the calculated parameters can be determined by regression methods, maximum likelihood estimation (MLE) and from summed intensities in addition to pixel-by-pixel evaluation. The confidence interval of calculated parameters can be estimated using parameter simulations if the approximate average number of detected photons is known. The program is not only user-friendly, but it provides rich output, it gives the user freedom to choose from different calculation modes and it gives insight into the reliability and distribution of the calculated parameters. © 2016 International Society for Advancement of Cytometry. PMID:27003481
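A minimal sketch of ratiometric, intensity-based FRET with overspill correction (illustrative only: the symbols, the correction scheme and the factor names below follow one common three-channel formulation, not necessarily the program's own):

```python
def fret_efficiency(i_dd, i_da, i_aa, s1, s2, alpha):
    """Three-channel intensity-based FRET efficiency (one common scheme).

    i_dd : donor excitation / donor emission intensity
    i_da : donor excitation / acceptor emission intensity (FRET channel)
    i_aa : acceptor excitation / acceptor emission intensity
    s1   : donor overspill factor into the FRET channel
    s2   : acceptor cross-excitation overspill factor
    alpha: spectroscopic factor relating donor and sensitized signals
    """
    fc = i_da - s1 * i_dd - s2 * i_aa        # overspill-corrected FRET signal
    return fc / (fc + alpha * i_dd)          # efficiency (per pixel or summed)
```

Applied pixel-by-pixel this yields the efficiency maps that can then be histogrammed and gated; applied to summed intensities it yields a single cell-level estimate, mirroring the calculation modes listed above.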
Portmann, Greg; Safranek, James; Huang, Xiaobiao; /SLAC
2011-10-18
The LOCO algorithm has been used by many accelerators around the world. Although the uses for LOCO vary, the most common use has been to find calibration errors and correct the optics functions. The light source community in particular has made extensive use of the LOCO algorithms to tightly control the beta function and coupling. Maintaining high-quality beam parameters requires constant attention, so a relatively large effort was put into software development for the LOCO application. The LOCO code was originally written in FORTRAN. This code worked fine, but it was somewhat awkward to use. For instance, the FORTRAN code itself did not calculate the model response matrix; it required a separate modeling code such as MAD to calculate the model matrix, and one then manually loaded the data into the LOCO code. As the number of people interested in LOCO grew, it became necessary to make it easier to use. The decision to port LOCO to Matlab was relatively easy: a matrix programming language with good graphics capability is well suited to the problem; Matlab was also being used for high-level machine control; and the accelerator modeling code AT [5] was already developed for Matlab. Since LOCO requires collecting and processing a relatively large amount of data, it is very helpful to have the LOCO code compatible with the high-level machine control [3]. A number of new features were added while porting the code from FORTRAN, and new methods continue to evolve [7][9]. Although Matlab LOCO was written with AT as the underlying tracking code, a mechanism to connect to other modeling codes has been provided.
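The heart of LOCO, adjusting model parameters until the model response matrix matches the measured one, is a nonlinear least-squares problem. A generic Gauss-Newton sketch in Python (illustrative only; the real code also fits quantities such as BPM and corrector gains):

```python
import numpy as np

def loco_fit(m_meas, model, p0, dp=1e-6, n_iter=5):
    """Gauss-Newton fit of model parameters to a measured response matrix.

    m_meas : measured response matrix
    model  : callable p -> model response matrix of the same shape
    p0     : initial parameter guess
    """
    p = np.asarray(p0, dtype=float).copy()
    for _ in range(n_iter):
        r = (m_meas - model(p)).ravel()            # flattened residual
        # finite-difference Jacobian of the flattened matrix w.r.t. p
        J = np.column_stack([
            (model(p + dp * e) - model(p)).ravel() / dp
            for e in np.eye(len(p))
        ])
        p += np.linalg.lstsq(J, r, rcond=None)[0]  # least-squares update
    return p
```

In the real application `model` is the accelerator model (AT computes the response matrix), and the fitted parameters are the calibration errors to be corrected.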
NASA Astrophysics Data System (ADS)
Comer, R. P.; Lawton, C.; Yale, M. M.
2001-05-01
Since 1998, MATLAB has supported HDF-EOS, the Earth Observing System (EOS) extension to the Hierarchical Data Format (HDF). MATLAB users can access, process, or view HDF-EOS data sets or construct new HDF-EOS products. They can work interactively or use MATLAB as a high-level programming language, and use the MATLAB Image Processing or Mapping Toolboxes. MATLAB 6.0, released in November 2000, incorporates the latest HDF 4 (Version 4.1r3) and HDF-EOS (Version 2.5v1) libraries. MATLAB provides a family of functions that parallel the C and Fortran application programmer interfaces (APIs) provided by the NCSA HDF and NASA HDF-EOS libraries. These functions enable full access to HDF-EOS data sets, via either interactive exploration or MATLAB programs (M-files). HDF and HDF-EOS data files can be read into or written from a MATLAB workspace. API-level functions in MATLAB include HDFPT, HDFSW, and HDFGD for interfaces to HDF-EOS point, swath, and grid objects, respectively. Both high-level functions and a graphical user interface (GUI) are planned for future releases. Prototypes of high-level functions (HDFINFO and HDFREAD) have already been developed and successfully demonstrated on HDF-EOS data sets from the Moderate Resolution Imaging Spectroradiometer (MODIS) on board NASA's Terra (EOS AM-1) satellite and HDF data sets from Landsat 7. MATLAB® is a registered trademark of The MathWorks, Inc.
Test Generator for MATLAB Simulations
NASA Technical Reports Server (NTRS)
Henry, Joel
2011-01-01
MATLAB Automated Test Tool, version 3.0 (MATT 3.0) is a software package that provides automated tools that reduce the time needed for extensive testing of simulation models that have been constructed in the MATLAB programming language by use of the Simulink and Real-Time Workshop programs. MATT 3.0 runs on top of the MATLAB engine application-program interface to communicate with the Simulink engine. MATT 3.0 automatically generates source code from the models, generates custom input data for testing both the models and the source code, and generates graphs and other presentations that facilitate comparison of the outputs of the models and the source code for the same input data. Context-sensitive and fully searchable help is provided in HyperText Markup Language (HTML) format.
Franco, E L; Simons, A R
1986-05-01
Two programs are described for the emulation of the dynamics of Reed-Frost progressive epidemics in a handheld programmable calculator (HP-41C series). The programs provide a complete record of cases, susceptibles, and immunes at each epidemic period using either the deterministic formulation or the trough analogue of the mechanical model for the stochastic version. Both programs can compute epidemics that include a constant rate of influx or outflux of susceptibles and single or double infectivity time periods. PMID:3962973
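The deterministic Reed-Frost recursion the programs implement is compact enough to sketch directly (Python here rather than calculator keystrokes; the constant influx/outflux and double-infectivity options are omitted):

```python
def reed_frost(cases0, susceptibles0, p, n_periods):
    """Deterministic Reed-Frost chain. Each susceptible escapes infection
    with probability (1 - p) per current case, giving
        C[t+1] = S[t] * (1 - (1 - p)**C[t]),   S[t+1] = S[t] - C[t+1].
    Returns the lists of cases and susceptibles at each epidemic period."""
    cases, sus = [float(cases0)], [float(susceptibles0)]
    for _ in range(n_periods):
        new = sus[-1] * (1.0 - (1.0 - p) ** cases[-1])
        cases.append(new)
        sus.append(sus[-1] - new)
    return cases, sus
```

With p = 1 everyone susceptible is infected in the first period and the epidemic dies out immediately; for small p the case counts rise and fall over several periods, tracing the classic epidemic curve.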
Friedman, R H; Frank, A D
1983-08-01
A rule-based computer system was developed to perform clinical decision-making support within a medical information system, oncology practice, and clinical research. This rule-based system, which has been programmed using deterministic rules, possesses features of generalizability, modularity of structure, convenience in rule acquisition, explanability, and utility for patient care and teaching, features which have been identified as advantages of artificial intelligence (AI) rule-based systems. Formal rules are primarily represented as conditional statements; common conditions and actions are stored in system dictionaries so that they can be recalled at any time to form new decision rules. Important similarities and differences exist in the structure of this system and clinical computer systems utilizing artificial intelligence (AI) production rule techniques. The non-AI rule-based system possesses advantages in cost and ease of implementation. The degree to which significant medical decision problems can be solved by this technique remains uncertain as does whether the more complex AI methodologies will be required. PMID:6352165
NASA Astrophysics Data System (ADS)
Smiljanić, J.; Žeželj, M.; Milanović, V.; Radovanović, J.; Stanković, I.
2014-03-01
A strong magnetic field applied along the growth direction of a quantum cascade laser (QCL) active region gives rise to a spectrum of discrete energy states, the Landau levels. By combining quantum engineering of a QCL with a static magnetic field, we can selectively inhibit/enhance non-radiative electron relaxation processes between the relevant Landau levels of a triple quantum well and realize a tunable surface-emitting device. An efficient numerical implementation is presented of an algorithm for optimizing GaAs/AlGaAs QCL active-region parameters and calculating output properties in the magnetic field. Both the theoretical analysis and a MATLAB implementation are given for the effects of LO-phonon and interface-roughness scattering mechanisms on the operation of the QCL. At elevated temperatures, electrons in the relevant laser states absorb/emit more LO-phonons, which results in a reduction of the optical gain. The decrease in the optical gain is moderated by the occurrence of interface-roughness scattering, which remains unchanged with increasing temperature. Using the calculated scattering rates as input data, the rate equations can be solved and the population inversion and optical gain obtained. Incorporation of the interface-roughness scattering mechanism into the model did not create new resonant peaks of the optical gain; however, it shifted the positions of the existing peaks and reduced the optical gain overall. Catalogue identifier: AERL_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AERL_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 37763 No. of bytes in distributed program, including test data, etc.: 2757956 Distribution format: tar.gz Programming language: MATLAB. Computer: Any capable of running MATLAB version R2010a or higher. Operating system: Any platform
Chengjiang Mao
1996-12-31
In typical AI systems, we employ so-called non-deterministic reasoning (NDR), which resorts to some systematic search with backtracking in the search spaces defined by knowledge bases (KBs). An eminent property of NDR is that it facilitates programming, especially programming for those difficult AI problems such as natural language processing for which it is difficult to find algorithms to tell computers what to do at every step. However, poor efficiency of NDR is still an open problem. Our work aims at overcoming this efficiency problem.
2006-08-03
This software provides a collection of MATLAB classes for tensor manipulations that can be used for fast algorithm prototyping. The tensor class extends the functionality of MATLAB's multidimensional arrays by supporting additional operations such as tensor multiplication. We have also added support for sparse tensors, tensors in Kruskal or Tucker format, and tensors stored as matrices (both dense and sparse).
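The kind of operation such a tensor class adds, for instance the mode-n (tensor-times-matrix) product that underlies Tucker-format manipulations, can be sketched in Python with NumPy (an illustration; the classes described above are MATLAB):

```python
import numpy as np

def mode_n_product(tensor, matrix, mode):
    """n-mode product: contract `matrix` (J x I_n) with dimension `mode`
    of `tensor` (..., I_n, ...), returning a tensor with I_n replaced by J."""
    t = np.moveaxis(np.asarray(tensor), mode, 0)
    shape = t.shape
    unfolded = t.reshape(shape[0], -1)            # mode-n unfolding
    result = matrix @ unfolded                    # contract along I_n
    return np.moveaxis(result.reshape((matrix.shape[0],) + shape[1:]), 0, mode)
```

A Tucker decomposition, for example, is just a core tensor multiplied by a factor matrix along each mode with this operation.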
NASA Astrophysics Data System (ADS)
Ingeman-Nielsen, Thomas; Baumgartner, François
2006-11-01
We have constructed a forward modelling code in Matlab, capable of handling several commonly used electrical and electromagnetic methods in a 1D environment. We review the implemented electromagnetic field equations for grounded wires, frequency and transient soundings, and present new solutions in the case of a non-magnetic first layer. The CR1Dmod code evaluates the Hankel transforms occurring in the field equations using either the Fast Hankel Transform, based on digital filter theory, or a numerical integration scheme applied between the zeros of the Bessel function. A graphical user interface allows easy construction of 1D models and control of the parameters. Modelling results are in agreement with those of other authors, but the computation time is longer than that of other available codes. Nevertheless, the CR1Dmod routine handles complex resistivities and offers solutions based on the full EM equations as well as the quasi-static approximation. Thus, modelling of effects based on changes in the magnetic permeability and the permittivity is also possible.
NASA Astrophysics Data System (ADS)
Trefan, Gyorgy
1993-01-01
The goal of this thesis is to contribute to the ambitious program of founding statistical physics on chaos. We build a deterministic model of Brownian motion and provide a microscopic derivation of the Fokker-Planck equation. Since the Brownian motion of a particle is the result of the competing processes of diffusion and dissipation, we create a model where both diffusion and dissipation originate from the same deterministic mechanism--the deterministic interaction of that particle with its environment. We show that standard diffusion, which is the basis of the Fokker-Planck equation, rests on the Central Limit Theorem and, consequently, on the possibility of deriving it from a deterministic process with a quickly decaying correlation function. The sensitive dependence on initial conditions, one of the defining properties of chaos, ensures this rapid decay. We carefully address the problem of deriving dissipation from the interaction of a particle with a fully deterministic nonlinear bath, which we term the booster. We show that the solution of this problem essentially rests on the linear response of a booster to an external perturbation. This raises a long-standing problem concerned with Kubo's Linear Response Theory and the strong criticism against it by van Kampen. Kubo's theory is based on a perturbation treatment of the Liouville equation, which, in turn, is expected to be totally equivalent to a first-order perturbation treatment of single trajectories. Since the boosters are chaotic, and chaos is essential to generate diffusion, the single trajectories are highly unstable and do not respond linearly to weak external perturbation. We adopt chaotic maps as boosters of a Brownian particle, and therefore address the problem of the response of a chaotic booster to an external perturbation. We notice that a fully chaotic map is characterized by an invariant measure which is a continuous function of the control parameters of the map
NASA Astrophysics Data System (ADS)
Umansky, Moti; Weihs, Daphne
2012-08-01
In many physical and biophysical studies, single-particle tracking is utilized to reveal interactions, diffusion coefficients, active modes of driving motion, dynamic local structure, micromechanics, and microrheology. The basic analysis applied to those data is to determine the time-dependent mean-square displacement (MSD) of particle trajectories and perform time- and ensemble-averaging of similar motions. The motion of particles typically exhibits time-dependent power-law scaling, and only trajectories with qualitatively and quantitatively comparable MSDs should be ensemble-averaged. Ensemble averaging trajectories that arise from different mechanisms, e.g., actively driven and diffusive, is incorrect and can result in inaccurate correlations between structure, mechanics, and activity. We have developed an algorithm to automatically and accurately determine the power-law scaling of experimentally measured single-particle MSDs. Trajectories can then be categorized and grouped according to user-defined cutoffs of time, amplitudes, scaling exponent values, or combinations thereof. Power-law fits are then provided for each trajectory alongside categorized groups of trajectories, histograms of power laws, and the ensemble-averaged MSD of each group. The codes are designed to be easily incorporated into existing user codes. We expect that this algorithm and program will be invaluable to anyone performing single-particle tracking, be it in physical or biophysical systems. Catalogue identifier: AEMD_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEMD_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 25 892 No. of bytes in distributed program, including test data, etc.: 5 572 780 Distribution format: tar.gz Programming language: MATLAB (MathWorks Inc.) version 7.11 (2010b) or higher, program
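The core of such an analysis, fitting a power law to each trajectory's MSD and grouping by exponent, can be sketched in Python (the program described is MATLAB; the cutoff values below are arbitrary examples):

```python
import numpy as np

def msd_power_law(lags, msd):
    """Fit MSD(t) = A * t**alpha by linear regression in log-log space.
    Returns (A, alpha)."""
    alpha, log_a = np.polyfit(np.log(lags), np.log(msd), 1)
    return np.exp(log_a), alpha

def classify(alpha, lo=0.9, hi=1.1):
    """Group a trajectory by its scaling exponent with user-defined cutoffs:
    alpha ~ 1 is diffusive, alpha > 1 suggests driven motion, alpha < 1
    sub-diffusive (constrained) motion."""
    if alpha < lo:
        return "subdiffusive"
    if alpha > hi:
        return "driven"
    return "diffusive"
```

Only trajectories landing in the same group would then be ensemble-averaged, which is the point made in the text about not mixing mechanisms.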
Parallelizing AT with MatlabMPI
Li, Evan Y.; /Brown U. /SLAC
2011-06-22
The Accelerator Toolbox (AT) is a high-level collection of tools and scripts specifically oriented toward solving problems dealing with computational accelerator physics. It is integrated into the MATLAB environment, which provides an accessible, intuitive interface for accelerator physicists, allowing researchers to focus the majority of their efforts on simulations and calculations, rather than programming and debugging difficulties. Efforts toward parallelization of AT have been put in place to upgrade its performance to modern standards of computing. We utilized the packages MatlabMPI and pMatlab, which were developed by MIT Lincoln Laboratory, to set up a message-passing environment that could be called within MATLAB, establishing the prerequisites for multithread processing capabilities. On local quad-core CPUs, we were able to demonstrate processor efficiencies of roughly 95% and speed increases of nearly 380%. By exploiting the efficacy of modern-day parallel computing, we were able to demonstrate highly efficient per-processor speed increases in AT's beam-tracking functions. Extrapolating from these predictions, we can expect to reduce week-long computation runtimes to less than 15 minutes. This is a huge performance improvement and has enormous implications for the future computing power of the accelerator physics group at SSRL. However, one of the downfalls of parringpass is its current lack of transparency; the pMatlab and MatlabMPI packages must first be well understood by the user before the system can be configured to run the scripts. In addition, the instantiation of argument parameters requires internal modification of the source code. Thus, parringpass cannot be directly run from the MATLAB command line, which detracts from its flexibility and user-friendliness. Future work in AT's parallelization will focus on development of external functions and scripts that can be called from within MATLAB and configured on multiple nodes, while
Hendrickx, Pieter MS; Martins, José C
2008-01-01
Background The advent of combinatorial chemistry has revived the interest in five-membered heterocyclic rings as scaffolds in pharmaceutical research. They are also the target of modifications in nucleic acid chemistry. Hence, the characterization of their conformational features is of considerable interest. This can be accomplished from the analysis of the 3J(HH) scalar coupling constants. Results A freely available program including an easy-to-use graphical user interface (GUI) has been developed for the calculation of five-membered ring conformations from scalar coupling constant data. A variety of operational modes and parameterizations can be selected by the user, and the coupling constants and electronegativity parameters can be defined interactively. Furthermore, the possibility of generating high-quality graphical output of the conformational space accessible to the molecule under study facilitates the interpretation of the results. These features are illustrated via the conformational analysis of two 4'-thio-2'-deoxynucleoside analogs. Results are discussed and compared with those obtained using the original PSEUROT program. Conclusion A user-friendly Matlab interface has been developed and tested. This should considerably improve the accessibility of this kind of calculations to the chemical community. PMID:18950513
NASA Astrophysics Data System (ADS)
Akgun, A.; Sezer, E. A.; Nefeslioglu, H. A.; Gokceoglu, C.; Pradhan, B.
2012-01-01
In this study, landslide susceptibility mapping using a purely expert opinion-based approach was applied to the Sinop (northern Turkey) region and its close vicinity. For this purpose, an easy-to-use program, "MamLand," was developed in MATLAB for the construction of a Mamdani fuzzy inference system (FIS). Using this newly developed program, it is possible to construct a landslide susceptibility map based on expert opinion. In this study, seven conditioning parameters characterising topographical, geological, and environmental conditions were included in the FIS. A landslide inventory dataset including 351 landslide locations was obtained for the study area. After the data production stage of the study, the data were processed using a soft computing approach, i.e., a Mamdani-type fuzzy inference system. In this system, only landslide conditioning data were assessed; the landslide inventory data were not included in the assessment. A file depicting the landslide susceptibility degrees for the study area was thus produced using the Mamdani FIS. These degrees were then exported into a GIS environment, where a landslide susceptibility map was produced and assessed statistically. For this purpose, the obtained landslide susceptibility map was compared with the landslide inventory data via a receiver operating characteristic (ROC) analysis. The resulting area under the curve (AUC) value of 0.855 indicates that this landslide susceptibility map, produced in a data-independent manner, was successful.
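The Mamdani inference chain used here (fuzzify the inputs, fire each rule with min, aggregate with max, defuzzify by centroid) can be sketched in a few lines. The Python below is a toy two-input system whose membership functions and rules are invented for illustration; it is not MamLand's actual rule base, parameters, or API.

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def mamdani_susceptibility(slope_deg, wetness):
    """Two-input, one-output Mamdani FIS: min implication, max
    aggregation, centroid defuzzification on a [0, 1] output grid.
    Assumes at least one rule fires for the given inputs."""
    y = np.linspace(0.0, 1.0, 101)              # susceptibility universe
    # input memberships (illustrative shapes, not the paper's)
    steep = trimf(slope_deg, 20, 45, 70)
    gentle = trimf(slope_deg, -1, 0, 25)
    wet = trimf(wetness, 0.4, 1.0, 1.6)
    # rule base: IF steep AND wet THEN high; IF gentle THEN low
    high = np.minimum(min(steep, wet), trimf(y, 0.5, 1.0, 1.5))
    low = np.minimum(gentle, trimf(y, -0.5, 0.0, 0.5))
    agg = np.maximum(high, low)                 # max aggregation
    return float(np.sum(y * agg) / np.sum(agg))  # centroid defuzzification
```

A steep, wet cell then defuzzifies to a high susceptibility degree and a gentle, dry cell to a low one, exactly the kind of per-pixel degree that is exported to the GIS.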
Yang, X.
1998-04-01
Large scale (up to 5 kt) chemical blasts are routinely conducted by the mining and quarry industries around the world to remove overburden or to fragment rocks. Because of their ability to trigger the future International Monitoring System (IMS) of the Comprehensive Test Ban Treaty (CTBT), these blasts are monitored and studied by verification seismologists for the purpose of discriminating them from possible clandestine nuclear tests. One important component of these studies is the modeling of ground motions from these blasts with theoretical and empirical source models. The modeling exercises provide physical bases for regional discriminants and help to explain the observed signal characteristics. The program MineSeis has been developed to implement synthetic seismogram modeling of multi-shot blast sources as the linear superposition of single-shot sources. The single-shot sources used in the modeling combine a spherical explosion with a spall component: Mueller and Murphy's (1971) model is used as the spherical explosion model, and a modification of Anandakrishnan et al.'s (1997) spall model is developed for the spall component. The program is implemented with the MATLAB® Graphical User Interface (GUI), providing the user with easy, interactive control of the calculation.
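The core of the multi-shot modeling is linear superposition of delayed, scaled single-shot waveforms. The Python sketch below uses a damped sinusoid as a stand-in wavelet (the real code uses the Mueller-Murphy explosion plus spall source); delays and weights for a ripple-fired pattern are illustrative.

```python
import numpy as np

def single_shot(t, f0=5.0):
    """Toy single-shot source wavelet (a damped sinusoid); a stand-in
    for the Mueller-Murphy explosion plus spall waveform."""
    return np.where(t >= 0, np.exp(-3.0 * t) * np.sin(2 * np.pi * f0 * t), 0.0)

def multi_shot(t, delays, weights):
    """Multi-shot synthetic as a linear superposition of delayed,
    scaled single-shot waveforms, as in MineSeis."""
    return sum(w * single_shot(t - d) for d, w in zip(delays, weights))

t = np.linspace(0.0, 2.0, 2001)
# e.g. a 5-hole ripple-fired pattern with 25 ms inter-shot delay
seis = multi_shot(t, delays=[0.025 * k for k in range(5)], weights=[1.0] * 5)
```

Because the model is linear, the spectral notches and reinforcements characteristic of ripple-fired blasts emerge directly from the delay pattern.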
Accelerator Toolbox for MATLAB
Terebilo, Andrei
2001-05-29
This paper introduces the Accelerator Toolbox (AT), a collection of tools to model particle accelerators and beam transport lines in the MATLAB environment. At SSRL, it has become the modeling code of choice for the ongoing design and future operation of the SPEAR 3 synchrotron light source. AT was designed to take advantage of the power and simplicity of MATLAB, a commercially developed environment for technical computing and visualization. Many examples in this paper illustrate the advantages of the AT approach and contrast it with existing accelerator code frameworks.
Safranek, James
2002-08-23
The storage ring linear optics debugging code LOCO (Linear Optics from Closed Orbits)[1] has been rewritten in MATLAB and linked to the accelerator modeling code AT [2]. LOCO uses the measured orbit response matrix to determine normal and skew quadrupole gradients. A MATLAB GUI provides a greatly improved user interface with graphical display of the fitting results. The option of including the shift in orbit with rf-frequency in the orbit response matrix has been added so that the model is adjusted to match the measured dispersion. This facilitates control of the horizontal dispersion, which is important for achieving small horizontal emittance. Also included are error bar calculation, outlier data rejection, accommodation of single-view BPMs (beam position monitors), and the option of including coupling in the fit. The code was written to allow the flexibility of linking it to other accelerator modeling codes.
An Accelerator Control Middle Layer Using MATLAB
Portmann, Gregory J.; Corbett, Jeff; Terebilo, Andrei
2005-05-15
Matlab is an interpretive programming language originally developed for convenient use with the LINPACK and EISPACK libraries. Matlab is appealing for accelerator physics because it is matrix-oriented and provides an active workspace for system variables, powerful graphics capabilities, built-in math libraries, and platform independence. A number of accelerator software toolboxes have been written in Matlab: the Accelerator Toolbox (AT) for model-based machine simulations, LOCO for on-line model calibration, and Matlab Channel Access (MCA) to connect with EPICS. The function of the MATLAB ''MiddleLayer'' is to provide a scripting language for machine simulations and on-line control, including non-EPICS based control systems. The MiddleLayer has simplified and streamlined development of high-level applications including configuration control, energy ramp, orbit correction, photon beam steering, ID compensation, beam-based alignment, tune correction and response matrix measurement. The database-driven Middle Layer software is largely machine-independent and easy to port. Six accelerators presently use the software package, with more scheduled to come on line soon.
Deterministic Walks with Choice
Beeler, Katy E.; Berenhaut, Kenneth S.; Cooper, Joshua N.; Hunter, Meagan N.; Barr, Peter S.
2014-01-10
This paper studies deterministic movement over toroidal grids, integrating local information, bounded memory and choice at individual nodes. The research is motivated by recent work on deterministic random walks, and applications in multi-agent systems. Several results regarding passing tokens through toroidal grids are discussed, as well as some open questions.
An Accelerator Control Middle Layer Using MATLAB
Portmann, Gregory J.; Corbett, Jeff; Terebilo, Andrei
2005-03-15
Matlab is a matrix manipulation language originally developed to be a convenient language for using the LINPACK and EISPACK libraries. What makes Matlab so appealing for accelerator physics is the combination of a matrix oriented programming language, an active workspace for system variables, powerful graphics capability, built-in math libraries, and platform independence. A number of software toolboxes for accelerators have been written in Matlab--the Accelerator Toolbox (AT) for machine simulations, LOCO for accelerator calibration, Matlab Channel Access Toolbox (MCA) for EPICS connections, and the Middle Layer. This paper will describe the ''middle layer'' software toolbox that resides between the high-level control applications and the low-level accelerator control system. This software was a collaborative effort between ALS (LBNL) and SPEAR3 (SSRL) but easily ports to other machines. Five accelerators presently use this software. The high-level Middle Layer functionality includes energy ramp, configuration control (save/restore), global orbit correction, local photon beam steering, insertion device compensation, beam-based alignment, tune correction, response matrix measurement, and script-based programs for machine physics studies.
Matlab Cluster Ensemble Toolbox
Sapio, Vincent De; Kegelmeyer, Philip
2009-04-27
This is a Matlab toolbox for investigating the application of cluster ensembles to data classification, with the objective of improving the accuracy and/or speed of clustering. The toolbox divides the cluster ensemble problem into four areas, providing functionality for each. These include, (1) synthetic data generation, (2) clustering to generate individual data partitions and similarity matrices, (3) consensus function generation and final clustering to generate ensemble data partitioning, and (4) implementation of accuracy metrics. With regard to data generation, Gaussian data of arbitrary dimension can be generated. The kcenters algorithm can then be used to generate individual data partitions by either, (a) subsampling the data and clustering each subsample, or by (b) randomly initializing the algorithm and generating a clustering for each initialization. In either case an overall similarity matrix can be computed using a consensus function operating on the individual similarity matrices. A final clustering can be performed and performance metrics are provided for evaluation purposes.
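Step (3), deriving a consensus from the individual partitions, is commonly done with a co-association matrix. The Python sketch below illustrates that idea only; it is not the toolbox's actual API, and the three example partitions are invented.

```python
import numpy as np

def coassociation(partitions):
    """Consensus (co-association) matrix: entry (i, j) is the fraction
    of ensemble partitions that place samples i and j in the same
    cluster.  A final clustering can then be run on this matrix."""
    labels = np.asarray(partitions)      # shape (n_partitions, n_samples)
    m, n = labels.shape
    C = np.zeros((n, n))
    for p in labels:
        C += (p[:, None] == p[None, :]).astype(float)
    return C / m

# three partitions of five samples (illustrative data)
parts = [[0, 0, 1, 1, 1],
         [0, 0, 0, 1, 1],
         [1, 1, 0, 0, 0]]
C = coassociation(parts)
```

Pairs that every partition groups together get entry 1.0; pairs never grouped together get 0.0, so thresholding or re-clustering C yields the ensemble's consensus partition.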
Matpar: Parallel Extensions for MATLAB
NASA Technical Reports Server (NTRS)
Springer, P. L.
1998-01-01
Matpar is a set of client/server software that allows a MATLAB user to take advantage of a parallel computer for very large problems. The user can replace calls to certain built-in MATLAB functions with calls to Matpar functions.
Deterministic hierarchical networks
NASA Astrophysics Data System (ADS)
Barrière, L.; Comellas, F.; Dalfó, C.; Fiol, M. A.
2016-06-01
It has been shown that many networks associated with complex systems are small-world (they have both a large local clustering coefficient and a small diameter) and also scale-free (the degrees are distributed according to a power law). Moreover, these networks are very often hierarchical, as they describe the modularity of the systems that are modeled. Most studies of complex networks are based on stochastic methods. However, a deterministic method, with an exact determination of the main relevant parameters of the networks, has proven useful. Indeed, this approach complements and enhances the probabilistic and simulation techniques and, therefore, provides a better understanding of the modeled systems. In this paper we find the radius, diameter, clustering coefficient and degree distribution of a generic family of deterministic hierarchical small-world scale-free networks that has been considered for modeling real-life complex systems.
A deterministic discrete ordinates transport proxy application
2014-06-03
Kripke is a simple 3D deterministic discrete ordinates (Sn) particle transport code that maintains the computational load and communications pattern of a real transport code. It is intended to be a research tool to explore different data layouts, new programming paradigms and computer architectures.
NASA Astrophysics Data System (ADS)
Gómez-Ortiz, David; Agarwal, Bhrigu N. P.
2005-05-01
A MATLAB source code, 3DINVER.M, is described to compute the 3D geometry of a horizontal density interface from a gridded gravity anomaly by the Parker-Oldenburg iterative method. The procedure is based on a relationship between the Fourier transform of the gravity anomaly and the sum of the Fourier transforms of powers of the interface topography. Given the mean depth of the density interface and the density contrast between the two media, the three-dimensional geometry of the interface is calculated iteratively. The iterative process terminates when either the RMS error between two successive approximations falls below a pre-assigned value used as the convergence criterion, or a pre-assigned maximum number of iterations is reached. A high-cut filter in the frequency domain has been incorporated to enhance convergence of the iterative process. The algorithm handles large data sets effectively, requiring only direct and inverse Fourier transforms. The inversion of a gravity anomaly over Brittany (France) to compute the Moho depth is presented as a practical example.
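The iteration can be sketched in one dimension. The Python below rearranges Parker's series for F[h] and loops until the RMS change between successive interfaces is small; sign conventions, the series truncation order, and all test parameters are illustrative, and the real 3DINVER.M works on 2D grids with an additional high-cut filter.

```python
import numpy as np
from math import factorial, pi

G = 6.674e-11  # gravitational constant (SI units)

def parker_forward(h, dx, z0, drho, nterms=3):
    """Parker's series (truncated): gravity anomaly of interface
    topography h (m) at mean depth z0 (m) with density contrast drho."""
    k = np.abs(np.fft.fftfreq(h.size, d=dx)) * 2 * pi
    S = sum(k ** (m - 1) / factorial(m) * np.fft.fft(h ** m)
            for m in range(1, nterms + 1))
    return np.real(np.fft.ifft(-2 * pi * G * drho * np.exp(-k * z0) * S))

def oldenburg_invert(dg, dx, z0, drho, nterms=3, max_iter=20, rms_tol=1.0):
    """Oldenburg's rearrangement: F[h] = -F[dg] e^{k z0}/(2 pi G drho)
    minus the higher-order series terms; iterate until the RMS change
    between successive interfaces drops below rms_tol (metres)."""
    k = np.abs(np.fft.fftfreq(dg.size, d=dx)) * 2 * pi
    base = -np.fft.fft(dg) * np.exp(k * z0) / (2 * pi * G * drho)
    h = np.real(np.fft.ifft(base))          # first-order estimate
    for _ in range(max_iter):
        corr = sum(k ** (m - 1) / factorial(m) * np.fft.fft(h ** m)
                   for m in range(2, nterms + 1))
        h_new = np.real(np.fft.ifft(base - corr))
        if np.sqrt(np.mean((h_new - h) ** 2)) < rms_tol:
            return h_new
        h = h_new
    return h
```

On noise-free synthetic data generated by the forward series, the scheme recovers the interface after a couple of iterations; with real data the exp(k z0) factor amplifies high-frequency noise, which is why the published method needs the high-cut filter.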
NASA Astrophysics Data System (ADS)
Ricard, Ludovic P.; Chanu, Jean-Baptiste
2013-08-01
The evaluation of potential and resources during geothermal exploration requires accurate and consistent temperature characterization and modelling of the subsurface. Existing interpretation and modelling approaches for 1D temperature measurements mainly focus on vertical heat conduction, with only a few approaches dealing with advective heat transport. Thermal regimes are strongly correlated with rock and fluid properties. Currently, no consensus exists for the identification of the thermal regime and the analysis of such datasets. We developed a new framework allowing the identification of thermal regimes by rock formation and the analysis and modelling of wireline logging and discrete temperature measurements, taking into account geological, geophysical and petrophysical data. This framework has been implemented in the GeoTemp™ software package, which allows complete thermal characterization and modelling at the formation scale and provides a set of standard tools for processing wireline and discrete temperature data. GeoTemp™ operates via a user-friendly graphical interface written in Matlab that allows semi-automatic calculation, display and export of the results. Output results can be exported as Microsoft Excel spreadsheets or vector graphics of publication quality. GeoTemp™ is illustrated here with an example geothermal application from Western Australia and can be used for academic, teaching and professional purposes.
NASA Astrophysics Data System (ADS)
Ghorbani, Ahmad; Camerlynck, Christian; Florsch, Nicolas
2009-02-01
An inversion code has been constructed in Matlab to recover the 1D parameters of the Cole-Cole model from spectral induced polarization data. In a spectral induced polarization survey, impedances are recorded at various frequencies. Both induced polarization and electromagnetic coupling effects occur simultaneously over the experimental frequency bandwidth, and the latter become progressively more dominant as the frequency increases. We used the CR1Dmod code published by Ingeman-Nielsen and Baumgartner [2006], which solves for electromagnetic responses in the presence of complex resistivity effects in a 1D Earth. In this paper, a homotopy method designed by the authors overcomes the local convergence problem of standard iterative methods; in addition, to further condition the inverse problem, we incorporated standard Gauss-Newton (or quasi-Newton) methods. Graphical user interfaces enable straightforward entry of the data and the a priori model, as well as the cable configuration. Two synthetic examples are presented, showing that the spectral parameters can be recovered from multifrequency complex resistivity data.
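For reference, the forward model whose parameters are being recovered, the (Pelton-form) Cole-Cole complex resistivity, is a one-liner; the inversion itself is far more involved. A Python sketch with illustrative parameter values:

```python
import numpy as np

def cole_cole(freq, rho0, m, tau, c):
    """Pelton Cole-Cole complex resistivity:
    rho(w) = rho0 * (1 - m * (1 - 1 / (1 + (i w tau)^c)))
    with chargeability m, time constant tau (s), exponent c."""
    w = 2 * np.pi * np.asarray(freq, dtype=float)
    return rho0 * (1 - m * (1 - 1.0 / (1 + (1j * w * tau) ** c)))

f = np.logspace(-2, 4, 50)                       # survey frequencies, Hz
rho = cole_cole(f, rho0=100.0, m=0.3, tau=0.01, c=0.5)
```

The model interpolates between the DC resistivity rho0 at low frequency and rho0*(1 - m) at high frequency, with a capacitive (negative imaginary) phase peak near 1/(2 pi tau); an inversion fits (rho0, m, tau, c) to the measured complex spectra.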
OMPC: an Open-Source MATLAB-to-Python Compiler.
Jurica, Peter; van Leeuwen, Cees
2009-01-01
Free access to scientific information facilitates scientific progress. Open-access scientific journals are a first step in this direction; a further step is to make auxiliary and supplementary materials that accompany scientific publications, such as methodological procedures and data-analysis tools, open and accessible to the scientific community. To this purpose it is instrumental to establish a software base, which will grow toward a comprehensive free and open-source language of technical and scientific computing. Endeavors in this direction are met with an important obstacle. MATLAB®, the predominant computation tool in many fields of research, is a closed-source commercial product. To facilitate the transition to an open computation platform, we propose the Open-source MATLAB®-to-Python Compiler (OMPC), a platform that uses syntax adaptation and emulation to allow transparent import of existing MATLAB® functions into Python programs. The imported MATLAB® modules will run independently of MATLAB®, relying on Python's numerical and scientific libraries. Python offers a stable and mature open-source platform that, in many respects, surpasses commonly used, expensive commercial closed-source packages. The proposed software will therefore facilitate the transparent transition towards a free and general open-source lingua franca for scientific computation, while enabling access to the existing methods and algorithms of technical computing already available in MATLAB®. OMPC is available at http://ompc.juricap.com. PMID:19225577
Deterministic Bilinear System Identification
NASA Astrophysics Data System (ADS)
Lee, Cheh-Han; Juang, Jer-Nan
2013-12-01
A unified identification method is proposed for the system realization of deterministic continuous-time and discrete-time bilinear models from input and output measurement data. A generalized Hankel matrix is formed from the output measurements obtained by applying a set of repeated input sequences to a bilinear system. A computational procedure is developed to extract a time-varying discrete-time state-space model from the generalized Hankel matrix. The bilinear system models are realized by transforming the identified time-varying discrete-time model into the bilinear models. Numerical simulations show the effectiveness of the proposed identification method.
The Deterministic Information Bottleneck
NASA Astrophysics Data System (ADS)
Strouse, D. J.; Schwab, David
2015-03-01
A fundamental and ubiquitous task that all organisms face is prediction of the future based on past sensory experience. Since an individual's memory resources are limited and costly, however, there is a tradeoff between memory cost and predictive payoff. The information bottleneck (IB) method (Tishby, Pereira, & Bialek 2000) formulates this tradeoff as a mathematical optimization problem using an information theoretic cost function. IB encourages storing as few bits of past sensory input as possible while selectively preserving the bits that are most predictive of the future. Here we introduce an alternative formulation of the IB method, which we call the deterministic information bottleneck (DIB). First, we argue for an alternative cost function, which better represents the biologically-motivated goal of minimizing required memory resources. Then, we show that this seemingly minor change has the dramatic effect of converting the optimal memory encoder from stochastic to deterministic. Next, we propose an iterative algorithm for solving the DIB problem. Additionally, we compare the IB and DIB methods on a variety of synthetic datasets, and examine the performance of retinal ganglion cell populations relative to the optimal encoding strategy for each problem.
Documentation generator application for MatLab source codes
NASA Astrophysics Data System (ADS)
Niton, B.; Pozniak, K. T.; Romaniuk, R. S.
2011-06-01
The UML, a complex system modeling and description technology, has recently been expanding its use in the formalization and algorithmic description of systems such as multiprocessor photonic, optoelectronic and advanced electronics carriers; distributed, multichannel measurement systems; optical networks; industrial electronics; and novel R&D solutions. The paper describes the realization of an application for documenting MatLab source codes. We present our own novel solution based on the Doxygen program, which is available under a free license with accessible source code. Bison and Flex were used as supporting tools for parser building. Practical results of the documentation generator are presented; the program was applied to exemplary MatLab codes. The documentation generator application is used in the design of large optoelectronic and electronic measurement and control systems. The paper consists of three parts, which describe the following components of the documentation generator for photonic and electronic systems: the concept, the MatLab application and the VHDL application. This is part two, which describes the MatLab application. MatLab is used for description of the measured phenomena.
Coded Modulation in C and MATLAB
NASA Technical Reports Server (NTRS)
Hamkins, Jon; Andrews, Kenneth S.
2011-01-01
This software, written separately in C and MATLAB as stand-alone packages with equivalent functionality, implements encoders and decoders for a set of nine error-correcting codes and modulators and demodulators for five modulation types. The software can be used as a single program to simulate the performance of such coded modulation. The error-correcting codes implemented are the nine accumulate repeat-4 jagged accumulate (AR4JA) low-density parity-check (LDPC) codes, which have been approved for international standardization by the Consultative Committee for Space Data Systems, and which are scheduled to fly on a series of NASA missions in the Constellation Program. The software implements the encoder and decoder functions, and contains compressed versions of generator and parity-check matrices used in these operations.
MatSeis: A Seismic toolbox for MATLAB
Harris, J.M.; Young, C.J.
1996-08-01
To support the signal processing and data visualization needs of CTBT-related projects at SNL, a MATLAB-based GUI was developed. This program is known as MatSeis. MatSeis was developed quickly using the available MATLAB functionality. It provides a time-distance profile plot integrating origin, waveform, travel-time, and arrival data. Graphical plot controls, data manipulation, and signal processing functions provide a user-friendly seismic analysis package. In addition, the full power of MATLAB (the premier tool for general numeric processing and visualization) is available for prototyping new functions by end users. This package is being made available to the seismic community in the hope that it will aid CTBT research and facilitate cooperative signal processing development. 2 refs., 5 figs.
Deterministic methods in radiation transport
Rice, A.F.; Roussin, R.W.
1992-06-01
The Seminar on Deterministic Methods in Radiation Transport was held February 4--5, 1992, in Oak Ridge, Tennessee. Eleven presentations were made and the full papers are published in this report, along with three that were submitted but not given orally. These papers represent a good overview of the state of the art in the deterministic solution of radiation transport problems for a variety of applications of current interest to the Radiation Shielding Information Center user community.
MATLAB-Based VHDL Development Environment
Katko, K. K.; Robinson, S. H.
2002-01-01
The Reconfigurable Computing program at Los Alamos National Laboratory (LANL) required synthesizable VHDL Fast Fourier Transform (FFT) designs that could be quickly implemented into FPGA-based high-speed digital signal processing architectures. Several different FFTs were needed for the different systems. As a result, the MATLAB-Based VHDL Development Environment was developed so that, with a small amount of work and forethought, arbitrarily sized FFTs with different bit-width parameters could be produced quickly from one VHDL-generating algorithm. The result is highly readable VHDL that can be modified quickly via the generating function to adapt to new algorithmic requirements. Several additional capabilities are integrated into the development environment, including a bit-true parameterized mathematical model, fixed-point design validation, test vector generation, VHDL design verification, and chip resource use estimation. LANL needed the flexibility to build a wide variety of FFTs with a quick turnaround time. It was important to have an effective way of trading off size, speed and precision, and the FFTs needed to be efficiently implemented in our existing FPGA-based architecture. Reconfigurable computing systems at LANL have been designed to accept two or four inputs on each clock, which allows the data processing rate to be reduced to a more manageable speed; this approach, however, prevents us from using existing FFT cores. The MATLAB-Based VHDL Development Environment (MBVDE) was created in response to our FFT needs. MBVDE provides more flexibility than is available with VHDL alone, and the technique allows new designs to be implemented and verified quickly; in addition, analysis tools are incorporated to evaluate trade-offs. MBVDE combines the performance of VHDL, the fast design time of core generation, and the benefit, familiar from C-based tools, of not having to know VHDL, into one environment. The MBVDE approach is not a comprehensive solution, but
Using Matlab in a Multivariable Calculus Course.
ERIC Educational Resources Information Center
Schlatter, Mark D.
The benefits of high-level mathematics packages such as Matlab include both a computer algebra system and the ability to provide students with concrete visual examples. This paper discusses how both capabilities of Matlab were used in a multivariate calculus class. Graphical user interfaces which display three-dimensional surfaces, contour plots,…
Deterministic multidimensional nonuniform gap sampling
NASA Astrophysics Data System (ADS)
Worley, Bradley; Powers, Robert
2015-12-01
Born from empirical observations in nonuniformly sampled multidimensional NMR data relating to gaps between sampled points, the Poisson-gap sampling method has enjoyed widespread use in biomolecular NMR. While the majority of nonuniform sampling schemes are fully randomly drawn from probability densities that vary over a Nyquist grid, the Poisson-gap scheme employs constrained random deviates to minimize the gaps between sampled grid points. We describe a deterministic gap sampling method, based on the average behavior of Poisson-gap sampling, which performs comparably to its random counterpart with the additional benefit of completely deterministic behavior. We also introduce a general algorithm for multidimensional nonuniform sampling based on a gap equation, and apply it to yield a deterministic sampling scheme that combines burst-mode sampling features with those of Poisson-gap schemes. Finally, we derive a relationship between stochastic gap equations and the expectation value of their sampling probability densities.
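The idea of a deterministic gap schedule can be illustrated in Python. The sketch below is in the spirit of the paper (sine-weighted gaps whose amplitude is bisected until the points just span the Nyquist grid); it is not the authors' exact algorithm, and the grid and sample counts are illustrative.

```python
import numpy as np

def sine_gap_schedule(grid_size, n_samples, iters=60):
    """Deterministic analogue of Poisson-gap sampling (a sketch): after
    each sample, skip a gap proportional to sin(theta) with theta
    running 0..pi across the schedule, so the largest gaps fall
    mid-grid.  The sine amplitude is found by bisection so that the
    n_samples points fit inside the grid."""
    def build(scale):
        pos, idx = 0, []
        for j in range(n_samples):
            idx.append(pos)
            theta = np.pi * (j + 0.5) / n_samples
            pos += 1 + int(scale * np.sin(theta))   # gap of at least 1
        return np.array(idx)
    lo, hi = 0.0, float(grid_size)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if build(mid)[-1] < grid_size:
            lo = mid                                # still fits: grow gaps
        else:
            hi = mid
    return build(lo)

idx = sine_gap_schedule(grid_size=128, n_samples=32)
```

Unlike a randomly drawn schedule, repeated calls return the identical point set, which is the paper's selling point: the sampling behaves like the average of many Poisson-gap draws but is exactly reproducible.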
Deterministic models for traffic jams
NASA Astrophysics Data System (ADS)
Nagel, Kai; Herrmann, Hans J.
1993-10-01
We study several deterministic one-dimensional traffic models. For integer positions and velocities we find the typical high- and low-density phases separated by a simple transition. If positions and velocities are continuous variables, the model shows self-organized criticality driven by the slowest car.
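The integer-valued case is essentially a Nagel-Schreckenberg cellular automaton with the random braking step removed. A Python sketch of one deterministic update on a ring road (cell counts and the speed limit are illustrative, not the paper's parameters):

```python
import numpy as np

def step(pos, vel, road_len, vmax=5):
    """One deterministic update: accelerate by 1 up to vmax, brake to
    the headway (empty cells to the next car), then move.  Cars live on
    a ring of road_len cells; arrays are returned sorted by position."""
    order = np.argsort(pos)
    pos, vel = pos[order], vel[order]
    gap = (np.roll(pos, -1) - pos - 1) % road_len   # empty cells ahead
    vel = np.minimum(np.minimum(vel + 1, vmax), gap)
    return (pos + vel) % road_len, vel
```

At low density every car accelerates to vmax (free flow); at high density the headways pin the velocities down (jam), reproducing the two phases without any randomness.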
Sparse Matrices in MATLAB: Design and Implementation
NASA Technical Reports Server (NTRS)
Gilbert, John R.; Moler, Cleve; Schreiber, Robert
1992-01-01
The matrix computation language and environment MATLAB is extended to include sparse matrix storage and operations. The only change to the outward appearance of the MATLAB language is a pair of commands to create full or sparse matrices. Nearly all the operations of MATLAB now apply equally to full or sparse matrices, without any explicit action by the user. The sparse data structure represents a matrix in space proportional to the number of nonzero entries, and most of the operations compute sparse results in time proportional to the number of arithmetic operations on nonzeros.
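The space-proportional-to-nonzeros storage described here is a compressed sparse format (MATLAB stores by columns; the row-wise variant below illustrates the same idea). A Python sketch of building such a structure from MATLAB-style (i, j, v) triplets and performing a matrix-vector product in time proportional to the nonzeros; the example matrix is invented.

```python
import numpy as np

def to_csr(i, j, v, n_rows):
    """Build a minimal compressed-sparse-row structure (data, column
    indices, row pointers) from triplet form, mirroring MATLAB's
    sparse(i, j, v): storage is O(nnz), not O(rows * cols)."""
    order = np.lexsort((j, i))                  # sort by row, then column
    i, j, v = i[order], j[order], v[order]
    indptr = np.zeros(n_rows + 1, dtype=int)
    np.add.at(indptr, i + 1, 1)                 # count entries per row
    return v, j, np.cumsum(indptr)

def csr_matvec(data, indices, indptr, x):
    """y = A x touching only stored nonzeros: O(nnz) arithmetic."""
    y = np.zeros(indptr.size - 1)
    for row in range(y.size):
        s = slice(indptr[row], indptr[row + 1])
        y[row] = data[s] @ x[indices[s]]
    return y

i = np.array([0, 1, 2, 0])
j = np.array([0, 1, 2, 2])
v = np.array([4.0, 5.0, 6.0, 1.0])
data, indices, indptr = to_csr(i, j, v, n_rows=3)
y = csr_matvec(data, indices, indptr, np.array([1.0, 2.0, 3.0]))
```

As in the paper's design, the logical shape (here 3x3, but it could be 10^6 x 10^6 with the same four entries) is decoupled from the storage cost.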
Remarks on parallel computations in MATLAB environment
NASA Astrophysics Data System (ADS)
Opalska, Katarzyna; Opalski, Leszek
2013-10-01
The paper summarizes the authors' investigation of the parallel computation capability of the MATLAB environment for solving large systems of ordinary differential equations (ODEs). Two MATLAB versions were tested with two parallelization techniques: one using multiple processor cores, the other CUDA-compatible graphics processing units (GPUs). A set of parameterized test problems was specially designed to expose the capabilities and limitations of the different variants of the parallel computation environment tested. The presented results clearly illustrate the superiority of the newer MATLAB version and the elapsed-time advantage of GPU-parallelized computations for large-dimensionality problems over multiple processor cores (with a speed-up factor strongly dependent on the problem structure).
Nuutinen, Mikko; Virtanen, Toni; Rummukainen, Olli; Häkkinen, Jukka
2016-03-01
This article presents VQone, a graphical experiment builder, written as a MATLAB toolbox, developed for image and video quality ratings. VQone contains the main elements needed for the subjective image and video quality rating process. This includes building and conducting experiments and data analysis. All functions can be controlled through graphical user interfaces. The experiment builder includes many standardized image and video quality rating methods. Moreover, it enables the creation of new methods or modified versions from standard methods. VQone is distributed free of charge under the terms of the GNU general public license and allows code modifications to be made so that the program's functions can be adjusted according to a user's requirements. VQone is available for download from the project page ( http://www.helsinki.fi/psychology/groups/visualcognition/ ). PMID:25595311
MOCCASIN: converting MATLAB ODE models to SBML
Gómez, Harold F.; Hucka, Michael; Keating, Sarah M.; Nudelman, German; Iber, Dagmar; Sealfon, Stuart C.
2016-01-01
Summary: MATLAB is popular in biological research for creating and simulating models that use ordinary differential equations (ODEs). However, sharing or using these models outside of MATLAB is often problematic. A community standard such as Systems Biology Markup Language (SBML) can serve as a neutral exchange format, but translating models from MATLAB to SBML can be challenging—especially for legacy models not written with translation in mind. We developed MOCCASIN (Model ODE Converter for Creating Automated SBML INteroperability) to help. MOCCASIN can convert ODE-based MATLAB models of biochemical reaction networks into the SBML format. Availability and implementation: MOCCASIN is available under the terms of the LGPL 2.1 license (http://www.gnu.org/licenses/lgpl-2.1.html). Source code, binaries and test cases can be freely obtained from https://github.com/sbmlteam/moccasin. Contact: mhucka@caltech.edu Supplementary information: More information is available at https://github.com/sbmlteam/moccasin. PMID:26861819
GPELab, a Matlab toolbox to solve Gross-Pitaevskii equations II: Dynamics and stochastic simulations
NASA Astrophysics Data System (ADS)
Antoine, Xavier; Duboscq, Romain
2015-08-01
GPELab is a free Matlab toolbox for modeling and numerically solving large classes of systems of Gross-Pitaevskii equations that arise in the physics of Bose-Einstein condensates. The aim of this second paper, which follows (Antoine and Duboscq, 2014), is to first present the various pseudospectral schemes available in GPELab for computing the deterministic and stochastic nonlinear dynamics of Gross-Pitaevskii equations (Antoine, et al., 2013). Next, the corresponding GPELab functions are explained in detail. Finally, some numerical examples are provided to show how the code works for the complex dynamics of BEC problems.
Deterministic relativistic quantum bit commitment
NASA Astrophysics Data System (ADS)
Adlam, Emily; Kent, Adrian
2015-06-01
We describe new unconditionally secure bit commitment schemes whose security is based on Minkowski causality and the monogamy of quantum entanglement. We first describe an ideal scheme that is purely deterministic, in the sense that neither party needs to generate any secret randomness at any stage. We also describe a variant that allows the committer to proceed deterministically, requires only local randomness generation from the receiver, and allows the commitment to be verified in the neighborhood of the unveiling point. We show that these schemes still offer near-perfect security in the presence of losses and errors, which can be made perfect if the committer uses an extra single random secret bit. We discuss scenarios where these advantages are significant.
Analysis of FBC deterministic chaos
Daw, C.S.
1996-06-01
It has recently been discovered that the performance of a number of fossil energy conversion devices, such as fluidized beds, pulsed combustors, steady combustors, and internal combustion engines, is affected by deterministic chaos. It is now recognized that understanding and controlling the chaotic elements of these devices can lead to significantly improved energy efficiency and reduced emissions. Application of these techniques to key fossil energy processes is expected to provide important competitive advantages for U.S. industry.
NASA Technical Reports Server (NTRS)
Barbieri, Enrique
2005-01-01
The Test and Engineering Directorate at NASA John C. Stennis Space Center became interested in studying the modeling, evaluation, and control of a liquid hydrogen (LH2) and gaseous hydrogen (GH2) mixer subsystem of a ground test facility. This facility carries out comprehensive ground-based testing and certification of liquid rocket engines, including the Space Shuttle Main Engine. A software simulation environment developed in MATLAB/SIMULINK (M/S) allows NASA engineers to test rocket engine systems at essentially no cost. In the progress report submitted in February 2004, we described the development of two foundation programs: a reverse look-up application using various interpolation algorithms, a variety of search-and-return methods, and self-checking methods that reduce the error in returned search results and increase the functionality of the program. The results showed that these efforts were successful. To transfer this technology to engineers who are not familiar with the M/S environment, a four-module GUI was implemented, allowing the user to evaluate the mixer model under open-loop and closed-loop conditions. The progress report was based on an undergraduate Honors Thesis by Ms. Jamie Granger Austin in the Department of Electrical Engineering and Computer Science at Tulane University during January-May 2003, and her continued efforts during August-December 2003. In collaboration with Dr. Hanz Richter and Dr. Fernando Figueroa, we published these results in a NASA Tech Brief due to appear this year. Although the original proposal in 2003 did not address other components of the test facility, we decided in the last few months to extend our research and consider a related pressurization tank component as well. This report summarizes the results obtained towards a Graphical User Interface (GUI) for the evaluation and control of the hydrogen mixer subsystem model and for the pressurization tank, each taken individually. Further research would combine the two
A Collection of Nonlinear Aircraft Simulations in MATLAB
NASA Technical Reports Server (NTRS)
Garza, Frederico R.; Morelli, Eugene A.
2003-01-01
Nonlinear six degree-of-freedom simulations for a variety of aircraft were created using MATLAB. Data for aircraft geometry, aerodynamic characteristics, mass / inertia properties, and engine characteristics were obtained from open literature publications documenting wind tunnel experiments and flight tests. Each nonlinear simulation was implemented within a common framework in MATLAB, and includes an interface with another commercially-available program to read pilot inputs and produce a three-dimensional (3-D) display of the simulated airplane motion. Aircraft simulations include the General Dynamics F-16 Fighting Falcon, Convair F-106B Delta Dart, Grumman F-14 Tomcat, McDonnell Douglas F-4 Phantom, NASA Langley Free-Flying Aircraft for Sub-scale Experimental Research (FASER), NASA HL-20 Lifting Body, NASA / DARPA X-31 Enhanced Fighter Maneuverability Demonstrator, and the Vought A-7 Corsair II. All nonlinear simulations and 3-D displays run in real time in response to pilot inputs, using contemporary desktop personal computer hardware. The simulations can also be run in batch mode. Each nonlinear simulation includes the full nonlinear dynamics of the bare airframe, with a scaled direct connection from pilot inputs to control surface deflections to provide adequate pilot control. Since all the nonlinear simulations are implemented entirely in MATLAB, user-defined control laws can be added in a straightforward fashion, and the simulations are portable across various computing platforms. Routines for trim, linearization, and numerical integration are included. The general nonlinear simulation framework and the specifics for each particular aircraft are documented.
SUNDIALSTB, a MATLAB Interface to SUNDIALS
Serban, R
2005-05-09
SUNDIALS [2], SUite of Nonlinear and DIfferential/ALgebraic equation Solvers, is a family of software tools for integration of ODE and DAE initial value problems and for the solution of nonlinear systems of equations. It consists of CVODE, IDA, and KINSOL, and variants of these with sensitivity analysis capabilities. SUNDIALSTB is a collection of MATLAB functions which provide interfaces to the SUNDIALS solvers. The core of each MATLAB interface in SUNDIALSTB is a single MEX file which interfaces to the various user-callable functions for that solver. However, this MEX file should not be called directly, but rather through the user-callable functions provided for each MATLAB interface. A major design principle for SUNDIALSTB was to provide an interface that is, as much as possible, equally familiar to users of both the SUNDIALS codes and MATLAB. Moreover, we tried to keep the number of user-callable functions to a minimum. For example, the CVODES MATLAB interface contains only 9 such functions, 3 of which interface solely to the adjoint sensitivity module in CVODES. In tune with the MATLAB ODESET function, optional solver inputs in SUNDIALSTB are specified through a single function (CVodeSetOptions for CVODES). However, unlike the ODE solvers in MATLAB, we have kept the more flexible SUNDIALS model in which a separate 'solve' function (CVodeSolve for CVODES) must be called to return the solution at a desired output time. Solver statistics, as well as optional outputs (such as solution and solution derivatives at additional times), can be obtained at any time with calls to separate functions (CVodeGetStats and CVodeGet for CVODES). This document provides complete documentation of the SUNDIALSTB functions. For additional details on the methods and the underlying SUNDIALS software, also consult the corresponding SUNDIALS user guides [3, 1].
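The calling pattern described above (options fixed once up front, a separate solve call for each requested output time, statistics queried through their own function) is easy to mirror in a few lines. The following Python sketch is our own illustration: the class, method, and option names only echo the CVODES interface, and a fixed-step RK4 stepper stands in for CVODE's adaptive methods.

```python
import math

class OdeSolver:
    """Sketch of the SUNDIALS-style calling pattern kept by SUNDIALSTB:
    set options once, call a separate solve for each output time, and
    query statistics independently (cf. CVodeSetOptions, CVodeSolve,
    CVodeGetStats). Fixed-step RK4 replaces CVODE's adaptive stepping."""

    def __init__(self, rhs, t0, y0, options=None):
        self.rhs, self.t, self.y = rhs, t0, y0
        self.opts = {"dt": 1e-3, **(options or {})}
        self.nsteps = 0

    def solve(self, tout):
        dt = self.opts["dt"]
        while tout - self.t > 1e-12:
            h = min(dt, tout - self.t)
            k1 = self.rhs(self.t, self.y)
            k2 = self.rhs(self.t + h / 2, self.y + h / 2 * k1)
            k3 = self.rhs(self.t + h / 2, self.y + h / 2 * k2)
            k4 = self.rhs(self.t + h, self.y + h * k3)
            self.y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
            self.t += h
            self.nsteps += 1
        return self.y

    def get_stats(self):
        return {"nsteps": self.nsteps}

solver = OdeSolver(lambda t, y: -y, 0.0, 1.0)   # y' = -y, y(0) = 1
y1 = solver.solve(1.0)                          # y(1) is about exp(-1)
err = abs(y1 - math.exp(-1.0))
```

The point of the pattern is that further calls to `solve` continue the same integration, which is what distinguishes it from MATLAB's one-shot ODE45-style interface.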
Deterministic scale-free networks
NASA Astrophysics Data System (ADS)
Barabási, Albert-László; Ravasz, Erzsébet; Vicsek, Tamás
2001-10-01
Scale-free networks are abundant in nature and society, describing such diverse systems as the world wide web, the web of human sexual contacts, or the chemical network of a cell. All models used to generate a scale-free topology are stochastic, that is they create networks in which the nodes appear to be randomly connected to each other. Here we propose a simple model that generates scale-free networks in a deterministic fashion. We solve exactly the model, showing that the tail of the degree distribution follows a power law.
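The deterministic construction can be reproduced in a few lines. The Python sketch below follows the paper's verbal recipe (at each iteration two copies of the current network are created and every bottom node of the copies is linked to the root); the data structures and variable names are our own.

```python
def brv_network(steps):
    """Deterministic scale-free network of Barabasi, Ravasz and Vicsek.
    Nodes are integers, the root is node 0; after t iterations the
    network has 3**t nodes."""
    edges = set()
    bottom = [0]        # iteration 0: a single node, both root and bottom
    n_nodes = 1
    for _ in range(steps):
        new_edges, new_bottom = set(), []
        for copy in (1, 2):                       # two copies of the network
            off = n_nodes * copy
            new_edges.update((a + off, b + off) for a, b in edges)
            new_bottom.extend(v + off for v in bottom)
        edges |= new_edges
        edges.update((0, v) for v in new_bottom)  # link copies' bottoms to root
        bottom = new_bottom
        n_nodes *= 3
    return n_nodes, edges

n, e = brv_network(3)
degree = {}
for a, b in e:
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1
```

After t iterations the root has degree 2 + 4 + ... + 2**t and is the largest hub, the signature of the power-law degree tail derived in the paper.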
SAR polar format implementation with MATLAB.
Martin, Grant D.; Doerry, Armin Walter
2005-11-01
Traditional polar format image formation for Synthetic Aperture Radar (SAR) requires a large amount of processing power and memory to accomplish in real time. These requirements can thus rule out interpreted language environments such as MATLAB. However, with trapezoidal-aperture phase history collection and changes to the traditional polar format algorithm, certain optimizations make MATLAB a possible tool for image formation. This document's purpose is therefore two-fold. The first part outlines a change to the existing polar format MATLAB implementation, utilizing the Chirp Z-Transform, that improves performance and memory usage, achieving near real-time results for smaller apertures. The second is the addition of two new image formation options that perform a more traditional interpolation-style image formation. These options allow continued exploration of possible interpolation methods for image formation, and some preliminary results comparing image quality are given.
Case studies on optimization problems in MATLAB and COMSOL multiphysics by means of the livelink
NASA Astrophysics Data System (ADS)
Ozana, Stepan; Pies, Martin; Docekal, Tomas
2016-06-01
LiveLink for COMSOL is a tool that integrates COMSOL Multiphysics with MATLAB, extending one's modeling with scripting programming in the MATLAB environment. It allows the user to utilize the full power of MATLAB and its toolboxes in preprocessing, model manipulation, and postprocessing. First, the head script launches COMSOL with MATLAB, defines the initial values of all parameters, refers to the objective function J described in the objective function file, and creates and runs the defined optimization task. Once the task is launched, the COMSOL model is called in the iteration loop (from the MATLAB environment via the API interface), changing the defined optimization parameters so that the objective function is minimized, using the fmincon function to find a local or global minimum of a constrained linear or nonlinear multivariable function. Once the minimum is found, it returns an exit flag, terminates the optimization and returns the optimized values of the parameters. The cooperation with MATLAB via LiveLink enhances a powerful computational environment with complex multiphysics simulations. The paper introduces the use of LiveLink for COMSOL in chosen case studies in the fields of technical cybernetics and bioengineering.
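The loop described above (a head script fixing parameters, an objective function J that runs the model, and an optimizer driving J to a minimum) can be prototyped without either product. In the Python sketch below, a quadratic stand-in replaces the COMSOL model call and a golden-section search stands in for fmincon; all names and values are illustrative.

```python
def run_model(p):
    """Stand-in for the COMSOL model evaluation invoked through the
    LiveLink API; a quadratic with its minimum at p = 2.5, J_min = 1."""
    return (p - 2.5) ** 2 + 1.0

def minimize_scalar(J, lo, hi, iters=60):
    """Golden-section search on [lo, hi]: a minimal stand-in for
    fmincon on a one-dimensional, bound-constrained problem."""
    phi = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)
    fc, fd = J(c), J(d)
    for _ in range(iters):
        if fc < fd:                 # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - phi * (b - a)
            fc = J(c)
        else:                       # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + phi * (b - a)
            fd = J(d)
    return 0.5 * (a + b)

p_opt = minimize_scalar(run_model, 0.0, 5.0)
```

The expensive step in the real setup is each `run_model` call (a full COMSOL solve), which is why derivative-free or few-evaluation optimizers are attractive in this workflow.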
Decline in maternal mortality in Matlab, Bangladesh: a cautionary tale.
Ronsmans, C; Vanneste, A M; Chakraborty, J; van Ginneken, J
This study examines the impact of the Maternal-Child Health and Family Planning (MCH-FP) program in Matlab, Bangladesh. Data were obtained from the Matlab surveillance system for treatment and comparison areas. This study reports the trends in maternal mortality since 1976. The MCH-FP area has received extensive health and family planning services since 1977. Services included trained traditional birth attendants and essential obstetric care from government district hospitals and a large number of private clinics. Geographic ease of access to essential obstetric care varied across the study area. Access was most difficult in the northern sector of the MCH-FP area. Contraception was made available through family welfare centers. Tetanus immunization was introduced in 1979. Door-to-door contraceptive services were provided by 80 female community health workers on a twice-monthly basis. In 1987, a community-based maternity care program was added to existing MCH-FP services in the northern treatment area. The demographic surveillance system began collecting data in 1966. During 1976-93 there were 624 maternal deaths among women aged 15-44 years in Matlab (510/100,000 live births). 72.8% of deaths were due to direct obstetric causes: postpartum hemorrhage, induced abortion, eclampsia, dystocia, and postpartum sepsis. Maternal mortality declined in a fluctuating fashion in both treatment and comparison areas. Direct obstetric mortality declined at about 3% per year. After 1987, direct obstetric mortality declined in the north by almost 50%. After the 1990 program expansion in the south, maternal mortality declined, though not significantly, in the south. Maternal mortality declined in the south comparison area during 1987-89 and then stabilized. The comparison area of the north showed no decline. PMID:9428252
NASA Astrophysics Data System (ADS)
Greene, C. A.; Bliss, A. K.; Blankenship, D. D.
2013-12-01
The Bedmap2 data suite [Fretwell et al. The Cryosphere 7,1 (2013)] contains approximately 25 million measurements of Antarctic surface elevation, ice thickness, and bed elevation which have been distilled into gridded elevations provided at 1 km horizontal resolution. We present a toolbox for Matlab to aid in the import, georeferencing, and presentation of the raster dataset provided by the Bedmap Consortium. The intent of these scripts is to give the intermediate-level Matlab user a set of simple and intuitive, yet powerful commands for Bedmap2 data access and map generation. Several examples of the utility of this toolbox are presented.
Some selected quantitative methods of thermal image analysis in Matlab.
Koprowski, Robert
2016-05-01
The paper presents a new algorithm based on selected automatic quantitative methods for analysing thermal images and shows the practical implementation of these image analysis methods in Matlab. The methods enable fully automated and reproducible measurements of selected parameters in thermal images. The paper also shows two examples of the use of the proposed image analysis methods, for the skin of a human foot and of a face. The full source code of the developed application is provided as an attachment. (Figure: the main window of the program during dynamic analysis of the foot thermal image.) PMID:26556680
MASCOT - MATLAB Stability and Control Toolbox
NASA Technical Reports Server (NTRS)
Kenny, Sean; Crespo, Luis
2011-01-01
MASCOT software was created to provide the conceptual aircraft designer with accurate predictions of air vehicle stability and control characteristics. The code takes as input mass property data in the form of an inertia tensor, aerodynamic loading data, and propulsion (i.e. thrust) loading data. Using fundamental nonlinear equations of motion, MASCOT then calculates vehicle trim and static stability data for any desired flight condition. Common predefined flight conditions are included: six horizontal and six landing rotation conditions with varying options for engine out, crosswind and sideslip, plus three takeoff rotation conditions. Results are displayed through a unique graphical interface developed to provide stability and control information to the conceptual design engineers, using a qualitative scale indicating whether the vehicle has acceptable, marginal, or unacceptable static stability characteristics. This software allows the user to prescribe the vehicle's CG location, mass, and inertia tensor so that any loading configuration between empty weight and maximum take-off weight can be analyzed. The required geometric and aerodynamic data as well as mass and inertia properties may be entered directly, passed through data files, or come from external programs such as Vehicle Sketch Pad (VSP). The current version of MASCOT has been tested with VSP used to compute the required data, which is then passed directly into the program. In VSP, the vehicle geometry is created and manipulated. The aerodynamic coefficients and stability and control derivatives are calculated using VorLax, which is now available directly within VSP. MASCOT has been written exclusively in the technical computing language MATLAB. This innovation is able to bridge the gap between low-fidelity conceptual design and higher-fidelity stability and control analysis. This new tool enables the conceptual design engineer to include detailed static stability
Survivability of Deterministic Dynamical Systems
Hellmann, Frank; Schultz, Paul; Grabow, Carsten; Heitzig, Jobst; Kurths, Jürgen
2016-01-01
The notion of a part of phase space containing desired (or allowed) states of a dynamical system is important in a wide range of complex systems research. It has been called the safe operating space, the viability kernel or the sunny region. In this paper we define the notion of survivability: Given a random initial condition, what is the likelihood that the transient behaviour of a deterministic system does not leave a region of desirable states. We demonstrate the utility of this novel stability measure by considering models from climate science, neuronal networks and power grids. We also show that a semi-analytic lower bound for the survivability of linear systems allows a numerically very efficient survivability analysis in realistic models of power grids. Our numerical and semi-analytic work underlines that the type of stability measured by survivability is not captured by common asymptotic stability measures. PMID:27405955
Survivability of Deterministic Dynamical Systems
NASA Astrophysics Data System (ADS)
Hellmann, Frank; Schultz, Paul; Grabow, Carsten; Heitzig, Jobst; Kurths, Jürgen
2016-07-01
The notion of a part of phase space containing desired (or allowed) states of a dynamical system is important in a wide range of complex systems research. It has been called the safe operating space, the viability kernel or the sunny region. In this paper we define the notion of survivability: Given a random initial condition, what is the likelihood that the transient behaviour of a deterministic system does not leave a region of desirable states. We demonstrate the utility of this novel stability measure by considering models from climate science, neuronal networks and power grids. We also show that a semi-analytic lower bound for the survivability of linear systems allows a numerically very efficient survivability analysis in realistic models of power grids. Our numerical and semi-analytic work underlines that the type of stability measured by survivability is not captured by common asymptotic stability measures.
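The definition is straightforward to turn into a Monte Carlo estimate. The Python sketch below uses a toy damped pendulum with forward-Euler integration (our choices, not the paper's climate, neuronal, or power-grid models): sample random initial conditions and count the fraction of transients that never leave the desirable region.

```python
import math
import random

def survives(x0, v0, steps=2000, dt=0.01):
    """Forward-Euler transient of a damped pendulum (toy system).
    Returns True iff |x| stays below pi for the whole transient,
    i.e. the trajectory never leaves the 'desirable' region."""
    x, v = x0, v0
    for _ in range(steps):
        a = -0.5 * v - math.sin(x)
        x, v = x + dt * v, v + dt * a
        if abs(x) >= math.pi:
            return False
    return True

def survivability(n=2000, seed=1):
    """Monte Carlo survivability: the fraction of random initial
    conditions whose transient stays inside the desirable region."""
    rng = random.Random(seed)
    good = sum(survives(rng.uniform(-math.pi, math.pi),
                        rng.uniform(-2.0, 2.0)) for _ in range(n))
    return good / n

s = survivability()
```

Note that survivability is a property of the transient, not of the attractor: an asymptotically stable system can still have low survivability if many transients make a large excursion before settling.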
MILAMIN 2 - Fast MATLAB FEM solver
NASA Astrophysics Data System (ADS)
Dabrowski, Marcin; Krotkiewski, Marcin; Schmid, Daniel W.
2013-04-01
MILAMIN is a free and efficient MATLAB-based two-dimensional FEM solver utilizing unstructured meshes [Dabrowski et al., G-cubed (2008)]. The code consists of steady-state thermal diffusion and incompressible Stokes flow solvers implemented in approximately 200 lines of native MATLAB code. The brevity makes the code easily customizable. An important quality of MILAMIN is speed - it can handle millions of nodes within minutes on one CPU core of a standard desktop computer, and is faster than many commercial solutions. The new MILAMIN 2 allows three-dimensional modeling. It is designed as a set of functional modules that can be used as building blocks for efficient FEM simulations using MATLAB. The utilities are largely implemented as native MATLAB functions. For performance-critical parts we use MUTILS - a suite of compiled MEX functions optimized for shared memory multi-core computers. The most important features of MILAMIN 2 are: 1. Modular approach to defining, tracking, and discretizing the geometry of the model 2. Interfaces to external mesh generators (e.g., Triangle, Fade2d, T3D) and mesh utilities (e.g., element type conversion, fast point location, boundary extraction) 3. Efficient computation of the stiffness matrix for a wide range of element types, anisotropic materials and three-dimensional problems 4. Fast global matrix assembly using a dedicated MEX function 5. Automatic integration rules 6. Flexible prescription (spatial, temporal, and field functions) and efficient application of Dirichlet, Neumann, and periodic boundary conditions 7. Treatment of transient and non-linear problems 8. Various iterative and multi-level solution strategies 9. Post-processing tools (e.g., numerical integration) 10. Visualization primitives using MATLAB, and VTK export functions We provide a large number of examples that show how to implement a custom FEM solver using the MILAMIN 2 framework. The examples are MATLAB scripts of increasing complexity that address a given
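The assemble-then-solve workflow that MILAMIN organizes can be shown at miniature scale. The Python sketch below builds the stiffness matrix and load vector for -u'' = f on [0,1] with linear elements and solves the system; it is a 1-D toy of our own, whereas MILAMIN targets 2-D/3-D unstructured meshes with vectorized and MEX-accelerated assembly.

```python
def fem_poisson_1d(n, f=lambda x: 1.0):
    """Linear-element FEM for -u'' = f on [0,1], u(0) = u(1) = 0.
    Returns the n-1 interior nodal values."""
    h = 1.0 / n
    m = n - 1
    A = [[0.0] * m for _ in range(m)]
    b = [0.0] * m
    for e in range(n):                  # element-by-element assembly
        dofs = (e - 1, e)               # interior-dof indices of element e
        ke = [[1.0 / h, -1.0 / h], [-1.0 / h, 1.0 / h]]  # element stiffness
        fe = f((e + 0.5) * h) * h / 2.0                  # midpoint-rule load
        for i, gi in enumerate(dofs):
            if not 0 <= gi < m:
                continue                # boundary dof: row/column dropped
            b[gi] += fe
            for j, gj in enumerate(dofs):
                if 0 <= gj < m:
                    A[gi][gj] += ke[i][j]
    # dense Gaussian elimination (no pivoting; A is SPD tridiagonal)
    for i in range(m):
        for j in range(i + 1, m):
            r = A[j][i] / A[i][i]
            for k in range(i, m):
                A[j][k] -= r * A[i][k]
            b[j] -= r * b[i]
    u = [0.0] * m
    for i in reversed(range(m)):
        u[i] = (b[i] - sum(A[i][j] * u[j] for j in range(i + 1, m))) / A[i][i]
    return u

u = fem_poisson_1d(10)    # exact solution is u(x) = x(1 - x) / 2
```

The dense triple loop is exactly the part that MILAMIN replaces with vectorized blockwise assembly and sparse solvers to reach millions of nodes.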
Parallel calculations on shared memory, NUMA-based computers using MATLAB
NASA Astrophysics Data System (ADS)
Krotkiewski, Marcin; Dabrowski, Marcin
2014-05-01
Achieving satisfactory computational performance in numerical simulations on modern computer architectures can be a complex task. Multi-core design makes it necessary to parallelize the code. Efficient parallelization on NUMA (Non-Uniform Memory Access) shared memory architectures necessitates explicit placement of the data in the memory close to the CPU that uses it. In addition, using more than 8 CPUs (~100 cores) requires a cluster solution of interconnected nodes, which involves (expensive) communication between the processors. It takes significant effort to overcome these challenges even when programming in low-level languages, which give the programmer full control over data placement and work distribution. Instead, many modelers use high-level tools such as MATLAB, which severely limit the optimization/tuning options available. Nonetheless, the advantage of programming simplicity and a large available code base can tip the scale in favor of MATLAB. We investigate whether MATLAB can be used for efficient, parallel computations on modern shared memory architectures. A common approach to performance optimization of MATLAB programs is to identify a bottleneck and migrate the corresponding code block to a MEX file implemented in, e.g., C. Instead, we aim at achieving a scalable parallel performance of MATLAB's core functionality. Some of MATLAB's internal functions (e.g., bsxfun, sort, BLAS3, operations on vectors) are multi-threaded. Achieving high parallel efficiency of those may potentially improve the performance of a significant portion of MATLAB's code base. Since we do not have MATLAB's source code, our performance tuning relies on the tools provided by the operating system alone. Most importantly, we use custom memory allocation routines, thread to CPU binding, and memory page migration. The performance tests are carried out on multi-socket shared memory systems (2- and 4-way Intel-based computers), as well as a Distributed Shared Memory machine with 96 CPU
SIGNUM: A Matlab, TIN-based landscape evolution model
NASA Astrophysics Data System (ADS)
Refice, A.; Giachetta, E.; Capolongo, D.
2012-08-01
Several numerical landscape evolution models (LEMs) have been developed to date, and many are available as open source codes. Most are written in efficient programming languages such as Fortran or C, but often require additional code efforts to plug in to more user-friendly data analysis and/or visualization tools to ease interpretation and scientific insight. In this paper, we present an effort to port a common core of accepted physical principles governing landscape evolution directly into a high-level language and data analysis environment such as Matlab. SIGNUM (acronym for Simple Integrated Geomorphological Numerical Model) is an independent and self-contained Matlab, TIN-based landscape evolution model, built to simulate topography development at various space and time scales. SIGNUM is presently capable of simulating hillslope processes such as linear and nonlinear diffusion, fluvial incision into bedrock, spatially varying surface uplift (which can be used to simulate changes in base level), thrusting and faulting, as well as effects of climate changes. Although based on accepted and well-known processes and algorithms in its present version, it is built with a modular structure, which allows the user to easily modify and upgrade the simulated physical processes to suit virtually any need. The code is conceived as an open-source project, and is thus an ideal tool for both research and didactic purposes, thanks to the high-level nature of the Matlab environment and its popularity among the scientific community. In this paper the simulation code is presented together with some simple examples of surface evolution, and guidelines for development of new modules and algorithms are proposed.
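One of the hillslope processes listed above, linear diffusion with uniform uplift, reduces to a one-line update per node. The Python sketch below integrates dz/dt = U + kappa * d2z/dx2 explicitly on a 1-D regular grid; SIGNUM itself operates on a TIN, and the parameter values here are purely illustrative.

```python
def evolve(z, dx, dt, steps, kappa=0.01, uplift=0.001):
    """Explicit finite differences for dz/dt = U + kappa * d2z/dx2:
    linear hillslope diffusion plus uniform uplift U, with the two
    boundary nodes pinned at base level. Stable for kappa*dt/dx**2 <= 0.5.
    (1-D regular-grid toy; SIGNUM itself works on a TIN.)"""
    for _ in range(steps):
        znew = z[:]
        for i in range(1, len(z) - 1):
            curv = (z[i - 1] - 2.0 * z[i] + z[i + 1]) / dx ** 2
            znew[i] = z[i] + dt * (uplift + kappa * curv)
        z = znew
    return z

# an initially flat surface uplifted between two fixed base levels
profile = evolve([0.0] * 101, dx=1.0, dt=10.0, steps=500)
```

The competition between uplift raising the interior and diffusion draining relief toward the fixed boundaries produces the familiar convex ridge profile.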
Are earthquakes deterministic or chaotic?
NASA Astrophysics Data System (ADS)
Rundle, John B.; Julian, Bruce R.; Turcotte, Donald L.
During the last decade, physicists and applied mathematicians have made substantial headway in understanding the dynamics of complex nonlinear systems. Progress has been possible due to the development of several new tools, including the renormalization group approach, phase portraits, and scaling methods (fractals). At the same time, mathematical geophysicists interested in earthquakes have begun to utilize these same concepts to generate models of faults and fractures. In order to bring these scientific communities together, it was decided to convene the workshop, Physics of Earthquake Faults: Deterministic or Chaotic?, held February 12-15, at the Asilomar conference center near Monterey, Calif. Thirty-six Earth scientists met with 15 physicists and applied mathematicians to discuss how recent advances in nonlinear systems might be applied to better understand earthquakes. Funding was provided by the Geodynamics Branch of the National Aeronautics and Space Administration, the National Science Foundation, and the Office of Basic Energy Sciences of the U.S. Department of Energy. Organizational and logistical support were provided by the U.S. Geological Survey.
Modelling of Photovoltaic Module Using Matlab Simulink
NASA Astrophysics Data System (ADS)
Afiqah Zainal, Nurul; Ajisman; Razlan Yusoff, Ahmad
2016-02-01
A photovoltaic (PV) module consists of a number of photovoltaic cells, connected in series and parallel, used to generate electricity from solar energy. The characteristics of a PV module differ depending on the model and on environmental factors. In this paper, simulation of a photovoltaic module using a Matlab Simulink approach is presented. The method is used to determine the characteristics of a PV module under various conditions, especially different levels of irradiance and temperature. Given different values of irradiance and temperature, the output power, voltage and current of the PV module can be determined. In addition, all results from Matlab Simulink are verified against theoretical calculation. The proposed model helps in better understanding PV module characteristics under various environmental conditions.
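The characteristic being simulated is typically the single-diode model. The Python sketch below (series and shunt resistances neglected; all parameter values are illustrative, not taken from the paper) reproduces the qualitative behaviour: the photocurrent scales with irradiance, the exponential diode term sets the open-circuit knee, and sweeping the voltage locates the maximum power point.

```python
import math

def pv_current(v, g=1000.0, t_c=25.0):
    """Single-diode PV cell model, I = Iph - I0*(exp(V/(n*Vt)) - 1),
    with series and shunt resistances neglected. g is irradiance
    [W/m^2], t_c the cell temperature [degC]; parameter values below
    are illustrative."""
    k_b, q = 1.380649e-23, 1.602176634e-19
    n = 1.3                          # diode ideality factor (assumed)
    i_sc, i_0 = 8.2, 1e-9            # short-circuit / saturation currents [A]
    v_t = k_b * (t_c + 273.15) / q   # thermal voltage
    i_ph = i_sc * g / 1000.0         # photocurrent scales with irradiance
    return i_ph - i_0 * (math.exp(v / (n * v_t)) - 1.0)

# sweep the cell voltage to locate the maximum power point
volts = [0.005 * i for i in range(140)]
powers = [v * pv_current(v) for v in volts]
v_mpp = volts[powers.index(max(powers))]
```

A Simulink version of the same model simply wires these algebraic blocks together, with irradiance and temperature as the externally varied inputs.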
MATLAB tensor classes for fast algorithm prototyping.
Bader, Brett William; Kolda, Tamara Gibson
2004-10-01
Tensors (also known as multidimensional arrays or N-way arrays) are used in a variety of applications ranging from chemometrics to psychometrics. We describe four MATLAB classes for tensor manipulations that can be used for fast algorithm prototyping. The tensor class extends the functionality of MATLAB's multidimensional arrays by supporting additional operations such as tensor multiplication. The tensor_as_matrix class supports the 'matricization' of a tensor, i.e., the conversion of a tensor to a matrix (and vice versa), a commonly used operation in many algorithms. Two additional classes represent tensors stored in decomposed formats: cp_tensor and tucker_tensor. We describe all of these classes and then demonstrate their use by showing how to implement several tensor algorithms that have appeared in the literature.
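Matricization is compact enough to show directly. The pure-Python sketch below unfolds a dense tensor (stored flat in row-major order) along a given mode, with columns ordered so that earlier non-mode indices vary fastest; it is our illustration of the operation, not code from the toolbox.

```python
from itertools import product

def unfold(x, shape, mode):
    """Mode-n matricization: arrange the mode-n fibers of a dense
    tensor as columns of a matrix. x is the tensor flattened in
    row-major order; in each column index the earlier non-mode
    indices vary fastest (Kolda-Bader style ordering)."""
    rows = shape[mode]
    cols = 1
    for k, s in enumerate(shape):
        if k != mode:
            cols *= s
    mat = [[0] * cols for _ in range(rows)]
    for idx in product(*(range(s) for s in shape)):
        pos = 0                        # flat row-major position of idx
        for i, s in zip(idx, shape):
            pos = pos * s + i
        col, stride = 0, 1             # column index over the other modes
        for k in range(len(shape)):
            if k != mode:
                col += idx[k] * stride
                stride *= shape[k]
        mat[idx[mode]][col] = x[pos]
    return mat

m0 = unfold(list(range(8)), (2, 2, 2), 0)   # 2 x 4 mode-0 unfolding
```

Folding back is the inverse assignment, which is why a tensor-as-matrix abstraction lets matrix algorithms (SVD, QR) be applied mode by mode.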
Matlab as a robust control design tool
NASA Technical Reports Server (NTRS)
Gregory, Irene M.
1994-01-01
This presentation introduces Matlab as a tool used in flight control research. The example used to illustrate some of the capabilities of this software is a robust controller designed for a single-stage-to-orbit air-breathing vehicle's ascent to orbit. The global requirements of the controller are to stabilize the vehicle and follow a trajectory in the presence of atmospheric disturbances and strong dynamic coupling between airframe and propulsion.
ACCELERATORS: A GUI tool for beta function measurement using MATLAB
NASA Astrophysics Data System (ADS)
Chen, Guang-Ling; Tian, Shun-Qiang; Jiang, Bo-Cheng; Liu, Gui-Min
2009-04-01
The beta function measurement detects the shift in the betatron tune as the strength of an individual quadrupole magnet is varied. A GUI (graphical user interface) tool for the beta function measurement has been developed using the MATLAB programming language in the Linux environment, which facilitates the commissioning of the Shanghai Synchrotron Radiation Facility (SSRF) storage ring. In this paper, we describe the design of the application and give some measurement results and a discussion of the definition of the measurement. The program has been optimized to work around some restrictions of the AT tracking code. After correction with LOCO (linear optics from closed orbits), the horizontal and vertical root mean square values (rms values) can be reduced to 0.12 and 0.10.
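The principle behind the measurement is the thin-lens relation between a small change in quadrupole strength and the resulting betatron tune shift, dnu = beta * dK * L / (4*pi), so the average beta function at the quadrupole follows directly. This is the standard accelerator-physics relation, shown here as a sketch rather than code from the paper:

```python
import math

def beta_from_tune_shift(d_nu, d_k, length):
    """Average beta function at a quadrupole of length L [m] whose
    gradient strength changes by d_k [m^-2], given the measured
    betatron tune shift d_nu (thin-lens approximation)."""
    return 4.0 * math.pi * d_nu / (d_k * length)

beta = beta_from_tune_shift(0.001, 0.01, 0.5)   # example numbers only
```

The GUI automates exactly this loop: step each quadrupole, measure the tune shift, and report the recovered beta values against the lattice model.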
SAR image formation toolbox for MATLAB
NASA Astrophysics Data System (ADS)
Gorham, LeRoy A.; Moore, Linda J.
2010-04-01
While many synthetic aperture radar (SAR) image formation techniques exist, two of the most intuitive methods for implementation by SAR novices are the matched filter and backprojection algorithms. The matched filter and (non-optimized) backprojection algorithms are undeniably computationally complex. However, the backprojection algorithm may be successfully employed for many SAR research endeavors not involving considerably large data sets and not requiring time-critical image formation. Execution of both image reconstruction algorithms in MATLAB is explicitly addressed. In particular, a manipulation of the backprojection imaging equations is supplied to show how common MATLAB functions, ifft and interp1, may be used for straightforward SAR image formation. In addition, limits for scene size and pixel spacing are derived to aid in the selection of an appropriate imaging grid to avoid aliasing. Example SAR images generated through use of the backprojection algorithm are provided for four publicly available SAR datasets. Finally, MATLAB code for SAR image reconstruction using the matched filter and backprojection algorithms is provided.
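The backprojection formulation described above (range-compressed profiles sampled at each pixel's range, cf. interp1, and summed after removing the expected two-way phase) can be sketched in pure Python. The geometry, carrier frequency, and point-target simulation below are toy values of our own, with linear interpolation standing in for interp1.

```python
import cmath
import math

def backproject(profiles, positions, ranges, pixels, fc=1e10):
    """Basic SAR backprojection: for every pixel, interpolate each
    pulse's range-compressed profile at the pixel's range and
    accumulate after removing the expected two-way phase."""
    c = 299792458.0
    k = 2.0 * math.pi * fc / c              # wavenumber at the carrier
    r0, dr = ranges[0], ranges[1] - ranges[0]
    img = []
    for (px, py) in pixels:
        acc = 0j
        for (sx, sy), prof in zip(positions, profiles):
            r = math.hypot(px - sx, py - sy)
            t = (r - r0) / dr               # fractional range-bin index
            i0 = min(max(int(t), 0), len(prof) - 2)
            frac = t - i0
            val = prof[i0] * (1.0 - frac) + prof[i0 + 1] * frac
            acc += val * cmath.exp(2j * k * r)   # matched phase factor
        img.append(acc)
    return img

# toy scene: one point scatterer at (0, 100); nine pulses along a track
k = 2.0 * math.pi * 1e10 / 299792458.0
positions = [(float(x), 0.0) for x in range(-4, 5)]
profiles = []
for (sx, sy) in positions:
    prof = [0j] * 64
    r = math.hypot(-sx, 100.0 - sy)
    t = r - 90.0                            # bins start at 90 m, 1 m apart
    i0, frac = int(t), t - int(t)
    echo = cmath.exp(-2j * k * r)           # ideal compressed point response
    prof[i0] += (1.0 - frac) * echo
    prof[i0 + 1] += frac * echo
    profiles.append(prof)

img = backproject(profiles, positions, [90.0, 91.0],
                  [(0.0, 100.0), (3.0, 92.0)])
```

Focusing shows up as a coherent sum at the true target pixel and near-cancellation everywhere else; a MATLAB version vectorizes the inner loop with interp1 over the whole pixel grid at once.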
Reproductive preferences in Matlab, Bangladesh: levels, motivation and differentials.
Razzaque, A
1996-03-01
This study provides evidence that aspirations for a smaller family and poverty both determined the reduction in family size preferences in the Matlab area of Bangladesh. Data were obtained from a variety of data sets: the 1990 Knowledge, Attitude, and Practice Survey; the 1982 Socioeconomic Survey; and the 1991 Qualitative Survey. Both treatment and nontreatment areas of Matlab experienced a fertility decline during 1976-90, from 6.9 to 3.6 children/woman in the treatment area and from 7.2 to 5.2 in the control area. In this study, multiple classification analysis and logistic regression analysis were conducted. Findings indicate that mean desired family sizes were similar in both areas, though slightly higher in the treatment area. Desired family size declined during 1975-90. Most of the decline probably occurred prior to 1985. Findings from qualitative interviews indicate that most women reported that the smaller desired family size was related to the direct economic cost of children. Women also reported that family planning was now available and that in the past there had been more resources for caring for large families. Mothers-in-law were open to informing their daughters-in-law about the desire for small families. This motivation for a small family among older and younger women was not present 10 years ago. Findings reveal that desired family size did not vary by age, family size, socioeconomic group, or existence of the Family Planning and Health Services Program. PMID:12291553
On the secure obfuscation of deterministic finite automata.
Anderson, William Erik
2008-06-01
In this paper, we show how to construct secure obfuscation for Deterministic Finite Automata, assuming non-uniformly strong one-way functions exist. We revisit the software protection approaches originally proposed by [5, 10, 12, 17] and revise them to the current obfuscation setting of Barak et al. [2]. Under this model, we introduce an efficient oracle that retains some 'small' secret about the original program. Using this secret, we can construct an obfuscator and two-party protocol that securely obfuscates Deterministic Finite Automata against malicious adversaries. The security of this model retains the strong 'virtual black box' property originally proposed in [2] while incorporating the stronger condition of dependent auxiliary inputs in [15]. Additionally, we show that our techniques remain secure under concurrent self-composition with adaptive inputs and that Turing machines are obfuscatable under this model.
Risk-based and deterministic regulation
Fischer, L.E.; Brown, N.W.
1995-07-01
Both risk-based and deterministic methods are used for regulating the nuclear industry to protect the public safety and health from undue risk. The deterministic method is one where performance standards are specified for each kind of nuclear system or facility. The deterministic performance standards address normal operations and design basis events which include transient and accident conditions. The risk-based method uses probabilistic risk assessment methods to supplement the deterministic one by (1) addressing all possible events (including those beyond the design basis events), (2) using a systematic, logical process for identifying and evaluating accidents, and (3) considering alternative means to reduce accident frequency and/or consequences. Although both deterministic and risk-based methods have been successfully applied, there is need for a better understanding of their applications and supportive roles. This paper describes the relationship between the two methods and how they are used to develop and assess regulations in the nuclear industry. Preliminary guidance is suggested for determining the need for using risk based methods to supplement deterministic ones. However, it is recommended that more detailed guidance and criteria be developed for this purpose.
Application in DSP/FPGA design of Matlab/Simulink
NASA Astrophysics Data System (ADS)
Liu, Yong-mei; Guan, Yong; Zhang, Jie; Wu, Min-hua; Wu, Lin-wei
2012-12-01
As an off-line simulation tool, the modular modelling method of Matlab/Simulink has the features of high efficiency and visualization. In order to realize the fast design and simulation of prototype systems, the new method of SignalWAVe/Simulink mixed modelling is presented, and a Reed-Solomon encoder-decoder model is built. The Reed-Solomon encoder-decoder model is simulated with Simulink. Further, the C language program and the .out executable file for the model are created by the SignalWAVe RTW Options module, which completes the hardware co-simulation. The simulation result conforms to the theoretical analysis, which proves the validity and feasibility of the method.
PSYCHOACOUSTICS: a comprehensive MATLAB toolbox for auditory testing
Soranzo, Alessandro; Grassi, Massimo
2014-01-01
PSYCHOACOUSTICS is a new MATLAB toolbox which implements three classic adaptive procedures for auditory threshold estimation. The first includes those of the Staircase family (method of limits, simple up-down and transformed up-down); the second is the Parameter Estimation by Sequential Testing (PEST); and the third is the Maximum Likelihood Procedure (MLP). The toolbox comes with more than twenty built-in experiments each provided with the recommended (default) parameters. However, if desired, these parameters can be modified through an intuitive and user friendly graphical interface and stored for future use (no programming skills are required). Finally, PSYCHOACOUSTICS is very flexible as it comes with several signal generators and can be easily extended for any experiment. PMID:25101013
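As an illustration of the first family of procedures the toolbox implements, here is a minimal Python sketch of a transformed up-down (2-down/1-up) staircase run against a simulated logistic observer. The starting level, step-halving rule and stopping criterion are arbitrary choices for the sketch, not PSYCHOACOUSTICS defaults.

```python
import numpy as np

rng = np.random.default_rng(42)

def observer(level, threshold=0.0, slope=1.0):
    # simulated listener: probability of a correct response at a stimulus level
    return 1.0/(1.0 + np.exp(-(level - threshold)/slope))

# 2-down/1-up staircase: converges to the 70.7%-correct point of the observer
level, step = 6.0, 2.0
correct_in_row, reversals, last_dir = 0, [], 0
while len(reversals) < 12:
    hit = rng.random() < observer(level)
    if hit:
        correct_in_row += 1
        if correct_in_row == 2:          # two correct in a row -> go down
            correct_in_row = 0
            if last_dir == +1:           # direction change = reversal
                reversals.append(level)
                step = max(step/2, 0.5)  # halve the step, floor at 0.5
            level -= step
            last_dir = -1
    else:
        correct_in_row = 0               # one wrong -> go up
        if last_dir == -1:
            reversals.append(level)
            step = max(step/2, 0.5)
        level += step
        last_dir = +1

threshold_est = np.mean(reversals[-8:])  # average of the last 8 reversals
print(threshold_est)   # typically near the observer's 70.7% point (~0.9 here)
```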
Documentation generator for VHDL and MatLab source codes for photonic and electronic systems
NASA Astrophysics Data System (ADS)
Niton, B.; Pozniak, K. T.; Romaniuk, R. S.
2011-06-01
The UML, which is a complex system modeling and description technology, has recently been expanding its uses in the field of formalization and algorithmic approach to such systems as multiprocessor photonic, optoelectronic and advanced electronics carriers; distributed, multichannel measurement systems; optical networks; industrial electronics; and novel R&D solutions. The paper describes a new concept of software dedicated to documenting source codes written in VHDL and MatLab. The work starts with an analysis of available documentation generators for both programming languages, with an emphasis on open source solutions. The authors' own solutions are presented, which are based on the Doxygen program, available under a free license with its source code. Supporting tools for parser building, such as Bison and Flex, were used. The documentation generator application is used for the design of large optoelectronic and electronic measurement and control systems. The paper consists of three parts which describe the following components of the documentation generator for photonic and electronic systems: concept, MatLab application and VHDL application. This is part one, which describes the system concept. Part two describes the MatLab application; MatLab is used for description of the measured phenomena. Part three describes the VHDL application; VHDL is used for behavioral description of the optoelectronic system. The proposed approach and application document big, complex software configurations for large systems.
Development and testing of a user-friendly Matlab interface for the JHU turbulence database system
NASA Astrophysics Data System (ADS)
Graham, Jason; Frederix, Edo; Meneveau, Charles
2011-11-01
One of the challenges that face researchers today is the ability to store large-scale data sets in a way that promotes easy access to the data and sharing among the research community. A public turbulence database cluster has been constructed in which 27 terabytes of a direct numerical simulation of isotropic turbulence is stored (Li et al., 2008, JoT). The public database provides researchers the ability to retrieve subsets of the spatiotemporal data remotely from a client machine anywhere over the internet. In addition to C and Fortran client interfaces, we now present a new Matlab interface based on Matlab's intrinsic SOAP functions. The Matlab interface provides the benefit of a high-level programming language with a plethora of intrinsic functions and toolboxes. In this talk, we will discuss several aspects of the Matlab interface including its development, optimization, usage, and application to the isotropic turbulence data. We will demonstrate several examples (visualizations, statistical analysis, etc.) which illustrate the tool. Supported by NSF (CDI-II, CMMI-0941530) and Eindhoven University of Technology's Masters internship program.
Stochastic search with Poisson and deterministic resetting
NASA Astrophysics Data System (ADS)
Bhat, Uttam; De Bacco, Caterina; Redner, S.
2016-08-01
We investigate a stochastic search process in one, two, and three dimensions in which N diffusing searchers that all start at x0 seek a target at the origin. Each searcher is also reset to its starting point, either with rate r or deterministically, with a reset time T. In one dimension and for a small number of searchers, the search time and the search cost are minimized at a non-zero optimal reset rate (or time), while for sufficiently large N, resetting always hinders the search. In general, a single searcher leads to the minimum search cost in one, two, and three dimensions. When the resetting is deterministic, several unexpected features arise for N searchers, including the search time being independent of T in the limit 1/T → 0 and the search cost being independent of N over a suitable range of N. Moreover, deterministic resetting typically leads to a lower search cost than Poisson resetting.
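A quick way to see the effect of Poisson resetting is direct simulation. The sketch below estimates the mean first-passage time of a single 1-D diffusing searcher with reset rate r and compares it with the known closed-form result T = (e^{x0·sqrt(r/D)} − 1)/r for diffusion with stochastic resetting; the time step, trial count and parameter values are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(7)

D, r, x0, dt = 1.0, 1.0, 1.0, 1e-3       # diffusivity, reset rate, start, step
trials, cap = 2000, 30000                # walkers and a step cap (30 time units)

x = np.full(trials, x0)
t = np.zeros(trials)
active = np.ones(trials, dtype=bool)
for _ in range(cap):
    if not active.any():
        break
    x[active] += np.sqrt(2*D*dt)*rng.standard_normal(active.sum())
    t[active] += dt
    active &= x > 0.0                    # absorb walkers that reached the target
    # Poisson resetting: each surviving walker returns to x0 with prob r*dt
    reset = active & (rng.random(trials) < r*dt)
    x[reset] = x0

mfpt = t[~active].mean()                 # mean first-passage time, absorbed walkers
theory = (np.exp(x0*np.sqrt(r/D)) - 1.0)/r
print(mfpt, theory)   # simulation should be close to the theory value (~1.72)
```

The discrete time step slightly overestimates the passage time (crossings inside a step are missed), so agreement is approximate.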
Optimal partial deterministic quantum teleportation of qubits
Mista, Ladislav Jr.; Filip, Radim
2005-02-01
We propose a protocol implementing optimal partial deterministic quantum teleportation for qubits. This is a teleportation scheme realizing deterministically an optimal 1 → 2 asymmetric universal cloning in which one imperfect copy of the input state emerges at the sender's station while the other copy emerges at the receiver's possibly distant station. The optimality means that the fidelities of the copies saturate the asymmetric cloning inequality. The performance of the protocol relies on the partial deterministic nondemolition Bell measurement that allows us to continuously control the flow of information among the outgoing qubits. We also demonstrate that the measurement is an optimal two-qubit operation in the sense of the trade-off between the state disturbance and the information gain.
Deterministic evolutionary game dynamics in finite populations.
Altrock, Philipp M; Traulsen, Arne
2009-07-01
Evolutionary game dynamics describes the spreading of successful strategies in a population of reproducing individuals. Typically, the microscopic definition of strategy spreading is stochastic such that the dynamics becomes deterministic only in infinitely large populations. Here, we present a microscopic birth-death process that has a fully deterministic strong selection limit in well-mixed populations of any size. Additionally, under weak selection, from this process the frequency-dependent Moran process is recovered. This makes it a natural extension of the usual evolutionary dynamics under weak selection. We find simple expressions for the fixation probabilities and average fixation times of the process in evolutionary games with two players and two strategies. For cyclic games with two players and three strategies, we show that the resulting deterministic dynamics crucially depends on the initial condition in a nontrivial way. PMID:19658731
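For comparison with the weak-selection limit mentioned above, the fixation probability of the standard frequency-dependent Moran process can be computed in closed form: φ1 = 1/(1 + Σ_k Π_{i≤k} γ_i) with γ_i = f_B(i)/f_A(i). The sketch below implements that textbook formula (for the usual Moran process, not the specific birth-death process introduced in the paper).

```python
import numpy as np

def moran_fixation(payoff, N, w):
    """Fixation probability of a single A mutant in a frequency-dependent
    Moran process (self-interaction excluded, fitness f = 1 - w + w*payoff)."""
    a, b, c, d = payoff
    gammas = []
    for i in range(1, N):                      # i = number of A players
        piA = (a*(i - 1) + b*(N - i))/(N - 1)  # average payoff of an A
        piB = (c*i + d*(N - i - 1))/(N - 1)    # average payoff of a B
        fA = 1 - w + w*piA
        fB = 1 - w + w*piB
        gammas.append(fB/fA)                   # ratio of backward/forward rates
    return 1.0/(1.0 + np.cumprod(gammas).sum())

# sanity checks: neutral drift gives 1/N; constant fitness r = 2 gives the
# classic (1 - 1/r)/(1 - r^-N)
print(moran_fixation((1, 1, 1, 1), 10, 0.0))   # 0.1
print(moran_fixation((2, 2, 1, 1), 100, 1.0))  # ~0.5
```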
Effect of Uncertainty on Deterministic Runway Scheduling
NASA Technical Reports Server (NTRS)
Gupta, Gautam; Malik, Waqar; Jung, Yoon C.
2012-01-01
Active runway scheduling involves scheduling departures for takeoffs and arrivals for runway crossing subject to numerous constraints. This paper evaluates the effect of uncertainty on a deterministic runway scheduler. The evaluation is done against a first-come-first-served (FCFS) scheme. In particular, the sequence from a deterministic scheduler is frozen and the times adjusted to satisfy all separation criteria; this approach is tested against FCFS. The comparison is done for both system performance (throughput and system delay) and predictability, and varying levels of congestion are considered. Uncertainty is modeled in two ways: as equal uncertainty in runway availability for all aircraft, and as increasing uncertainty for later aircraft. Results indicate that the deterministic approach consistently performs better than first-come-first-served in both system performance and predictability.
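The baseline FCFS scheme is straightforward to state in code: aircraft are served in ETA order, each delayed only as needed to honour the pairwise separation requirement behind its predecessor. A minimal sketch (the weight classes and separation values below are invented for illustration):

```python
def fcfs_schedule(aircraft, sep):
    """First-come-first-served runway schedule.
    aircraft: list of (eta_seconds, weight_class), sorted by ETA;
    sep: dict mapping (lead_class, trail_class) -> required separation (s)."""
    times = []
    prev_t, prev_c = None, None
    for eta, cls in aircraft:
        t = eta
        if prev_t is not None:
            # cannot use the runway before the separation behind the leader
            t = max(t, prev_t + sep[(prev_c, cls)])
        times.append(t)
        prev_t, prev_c = t, cls
    return times

# heavy leader forces a long gap; the queue then compresses to minimum spacing
demo = [(0, 'H'), (60, 'S'), (70, 'S')]
sep = {('H', 'S'): 120, ('S', 'S'): 60, ('S', 'H'): 90, ('H', 'H'): 90}
print(fcfs_schedule(demo, sep))   # [0, 120, 180]
```

A deterministic scheduler would instead search over sequences; freezing its sequence and re-applying the max() time adjustment above is the hybrid tested in the paper.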
Deterministic mediated superdense coding with linear optics
NASA Astrophysics Data System (ADS)
Pavičić, Mladen
2016-02-01
We present a scheme of deterministic mediated superdense coding of entangled photon states employing only linear-optics elements. Ideally, we are able to deterministically transfer four messages by manipulating just one of the photons. Two degrees of freedom, polarization and spatial, are used. A new kind of source of heralded down-converted photon pairs conditioned on detection of another pair with an efficiency of 92% is proposed. Realistic probabilistic experimental verification of the scheme with such a source of preselected pairs is feasible with today's technology. We obtain the channel capacity of 1.78 bits for a full-fledged implementation.
Deterministic aggregation kinetics of superparamagnetic colloidal particles
NASA Astrophysics Data System (ADS)
Reynolds, Colin P.; Klop, Kira E.; Lavergne, François A.; Morrow, Sarah M.; Aarts, Dirk G. A. L.; Dullens, Roel P. A.
2015-12-01
We study the irreversible aggregation kinetics of superparamagnetic colloidal particles in two dimensions in the presence of an in-plane magnetic field at low packing fractions. Optical microscopy and image analysis techniques are used to follow the aggregation process and in particular study the packing fraction and field dependence of the mean cluster size. We compare these to the theoretically predicted scalings for diffusion limited and deterministic aggregation. It is shown that the aggregation kinetics for our experimental system is consistent with a deterministic mechanism, which thus shows that the contribution of diffusion is negligible.
Nine challenges for deterministic epidemic models
Roberts, Mick; Andreasen, Viggo; Lloyd, Alun; Pellis, Lorenzo
2016-01-01
Deterministic models have a long history of being applied to the study of infectious disease epidemiology. We highlight and discuss nine challenges in this area. The first two concern the endemic equilibrium and its stability. We indicate the need for models that describe multi-strain infections, infections with time-varying infectivity, and those where super infection is possible. We then consider the need for advances in spatial epidemic models, and draw attention to the lack of models that explore the relationship between communicable and non-communicable diseases. The final two challenges concern the uses and limitations of deterministic models as approximations to stochastic systems. PMID:25843383
Image Algebra Matlab language version 2.3 for image processing and compression research
NASA Astrophysics Data System (ADS)
Schmalz, Mark S.; Ritter, Gerhard X.; Hayden, Eric
2010-08-01
Image algebra is a rigorous, concise notation that unifies linear and nonlinear mathematics in the image domain. Image algebra was developed under DARPA and US Air Force sponsorship at University of Florida for over 15 years beginning in 1984. Image algebra has been implemented in a variety of programming languages designed specifically to support the development of image processing and computer vision algorithms and software. The University of Florida has been associated with image algebra implementations in the languages FORTRAN, Ada, Lisp, and C++. The latter implementation involved a class library, iac++, that supported image algebra programming in C++. Since image processing and computer vision are generally performed with operands that are array-based, the Matlab™ programming language is ideal for implementing the common subset of image algebra. Objects include sets and set operations, images and operations on images, as well as templates and image-template convolution operations. This implementation, called Image Algebra Matlab (IAM), has been found to be useful for research in data, image, and video compression, as described herein. Due to the widespread acceptance of the Matlab programming language in the computing community, IAM offers exciting possibilities for supporting a large group of users. The control over an object's computational resources provided to the algorithm designer by Matlab means that IAM programs can employ versatile representations for the operands and operations of the algebra, which are supported by the underlying libraries written in Matlab. In a previous publication, we showed how the functionality of IAC++ could be carried forth into a Matlab implementation, and provided practical details of a prototype implementation called IAM Version 1. In this paper, we further elaborate the purpose and structure of image algebra, then present a maturing implementation of Image Algebra Matlab called IAM Version 2.3, which extends the previous implementation
Object-oriented Matlab adaptive optics toolbox
NASA Astrophysics Data System (ADS)
Conan, R.; Correia, C.
2014-08-01
Object-Oriented Matlab Adaptive Optics (OOMAO) is a Matlab toolbox dedicated to Adaptive Optics (AO) systems. OOMAO is based on a small set of classes representing the source, atmosphere, telescope, wavefront sensor, Deformable Mirror (DM) and an imager of an AO system. This simple set of classes allows simulating Natural Guide Star (NGS) and Laser Guide Star (LGS) Single Conjugate AO (SCAO) and tomography AO systems on telescopes up to the size of the Extremely Large Telescopes (ELT). The discrete phase screens that make the atmosphere model can be of infinite size, useful for modeling system performance on large time scales. OOMAO comes with its own parametric influence function model to emulate different types of DMs. The cone effect, altitude thickness and intensity profile of LGSs are also reproduced. Both modal and zonal modeling approaches are implemented. OOMAO also has an extensive library of theoretical expressions to evaluate the statistical properties of turbulence wavefronts. The main design characteristics of the OOMAO toolbox are object-oriented modularity, vectorized code and transparent parallel computing. OOMAO has been used to simulate and to design the Multi-Object AO prototype Raven at the Subaru telescope and the Laser Tomography AO system of the Giant Magellan Telescope. In this paper, a Laser Tomography AO system on an ELT is simulated with OOMAO. In the first part, we set up the class parameters and we link the instantiated objects to create the source optical path. Then we build the tomographic reconstructor and write the script for the pseudo-open-loop controller.
Flexible missile autopilot design studies with PC-MATLAB/386
NASA Technical Reports Server (NTRS)
Ruth, Michael J.
1989-01-01
Development of a responsive, high-bandwidth missile autopilot for airframes which have structural modes of unusually low frequency presents a challenging design task. Such systems are viable candidates for modern, state-space control design methods. The PC-MATLAB interactive software package provides an environment well-suited to the development of candidate linear control laws for flexible missile autopilots. The strengths of MATLAB include: (1) exceptionally high speed (MATLAB's version for 80386-based PC's offers benchmarks approaching minicomputer and mainframe performance); (2) ability to handle large design models of several hundred degrees of freedom, if necessary; and (3) broad extensibility through user-defined functions. To characterize MATLAB capabilities, a simplified design example is presented. This involves interactive definition of an observer-based state-space compensator for a flexible missile autopilot design task. MATLAB capabilities and limitations, in the context of this design task, are then summarized.
STATISTICAL ANALYSIS OF A DETERMINISTIC STOCHASTIC ORBIT
Kaufman, Allan N.; Abarbanel, Henry D.I.; Grebogi, Celso
1980-05-01
If the solution of a deterministic equation is stochastic (in the sense of orbital instability), it can be subjected to a statistical analysis. This is illustrated for a coded orbit of the Chirikov mapping. Statistical dependence and the Markov assumption are tested. The Kolmogorov-Sinai entropy is related to the probability distribution for the orbit.
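The Chirikov mapping referred to above is the standard map p' = p + K sin θ, θ' = θ + p' (mod 2π). The sketch below estimates its largest Lyapunov exponent via the tangent map, a standard diagnostic of the orbital instability the abstract exploits; K and the initial condition are arbitrary choices, and the paper's particular coded orbit is not reproduced.

```python
import numpy as np

K = 5.0                       # strongly chaotic regime of the standard map
theta, p = 1.0, 0.5
v = np.array([1.0, 0.0])      # tangent vector (dp, dtheta)
lyap_sum, n_iter = 0.0, 100000
for _ in range(n_iter):
    # tangent (Jacobian) map of the Chirikov standard map at the current theta
    J = np.array([[1.0, K*np.cos(theta)],
                  [1.0, 1.0 + K*np.cos(theta)]])
    v = J @ v
    norm = np.linalg.norm(v)
    lyap_sum += np.log(norm)  # accumulate stretching, then renormalize
    v /= norm
    # the map itself
    p = p + K*np.sin(theta)
    theta = (theta + p) % (2*np.pi)

lyap = lyap_sum/n_iter
print(lyap)   # positive, close to ln(K/2) ~ 0.92 for large K
```

A positive exponent is precisely the orbital instability that justifies treating the deterministic orbit statistically.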
Gene ARMADA: an integrated multi-analysis platform for microarray data implemented in MATLAB
Chatziioannou, Aristotelis; Moulos, Panagiotis; Kolisis, Fragiskos N
2009-01-01
Background The microarray data analysis realm is ever growing through the development of various tools, open source and commercial. However there is absence of predefined rational algorithmic analysis workflows or batch standardized processing to incorporate all steps, from raw data import up to the derivation of significantly differentially expressed gene lists. This absence obfuscates the analytical procedure and obstructs the massive comparative processing of genomic microarray datasets. Moreover, the solutions provided, heavily depend on the programming skills of the user, whereas in the case of GUI embedded solutions, they do not provide direct support of various raw image analysis formats or a versatile and simultaneously flexible combination of signal processing methods. Results We describe here Gene ARMADA (Automated Robust MicroArray Data Analysis), a MATLAB implemented platform with a Graphical User Interface. This suite integrates all steps of microarray data analysis including automated data import, noise correction and filtering, normalization, statistical selection of differentially expressed genes, clustering, classification and annotation. In its current version, Gene ARMADA fully supports 2 coloured cDNA and Affymetrix oligonucleotide arrays, plus custom arrays for which experimental details are given in tabular form (Excel spreadsheet, comma separated values, tab-delimited text formats). It also supports the analysis of already processed results through its versatile import editor. Besides being fully automated, Gene ARMADA incorporates numerous functionalities of the Statistics and Bioinformatics Toolboxes of MATLAB. In addition, it provides numerous visualization and exploration tools plus customizable export data formats for seamless integration by other analysis tools or MATLAB, for further processing. Gene ARMADA requires MATLAB 7.4 (R2007a) or higher and is also distributed as a stand-alone application with MATLAB Component Runtime
Blueprint XAS: a Matlab-based toolbox for the fitting and analysis of XAS spectra.
Delgado-Jaime, Mario Ulises; Mewis, Craig Philip; Kennepohl, Pierre
2010-01-01
Blueprint XAS is a new Matlab-based program developed to fit and analyse X-ray absorption spectroscopy (XAS) data, most specifically in the near-edge region of the spectrum. The program is based on a methodology that introduces a novel background model into the complete fit model and that is capable of generating any number of independent fits with minimal introduction of user bias [Delgado-Jaime & Kennepohl (2010), J. Synchrotron Rad. 17, 119-128]. The functions and settings on the five panels of its graphical user interface are designed to suit the needs of near-edge XAS data analyzers. A batch function allows for the setting of multiple jobs to be run with Matlab in the background. A unique statistics panel allows the user to analyse a family of independent fits, to evaluate fit models and to draw statistically supported conclusions. The version introduced here (v0.2) is currently a toolbox for Matlab. Future stand-alone versions of the program will also incorporate several other new features to create a full package of tools for XAS data processing. PMID:20029122
A MATLAB GUI based algorithm for modelling Magnetotelluric data
NASA Astrophysics Data System (ADS)
Timur, Emre; Onsen, Funda
2016-04-01
The magnetotelluric method is an electromagnetic survey technique that images the electrical resistivity distribution of subsurface layers. It simultaneously measures the total electromagnetic field components, i.e. the time-varying magnetic field B(t) and the induced electric field E(t). Forward modeling of the magnetotelluric response is beneficial for survey planning, for understanding the method (especially for students), and as part of the iteration process when inverting measured data. The MTINV program can be used to model and interpret geophysical electromagnetic (EM) magnetotelluric (MT) measurements using a horizontally layered earth model. This program uses either the apparent resistivity and phase components of the MT data together, or the apparent resistivity data alone. Parameter optimization, based on a linearized inversion method, can be utilized in 1D interpretations. In this study, a new MATLAB GUI based algorithm has been written for the 1D forward modeling of the magnetotelluric response function for multiple layers, for use in educational studies. The code also includes an automatic Gaussian noise option for a specified noise ratio. Numerous applications were carried out and presented for 2-, 3- and 4-layer models, and the resulting theoretical data were interpreted using MTINV in order to evaluate the initial parameters and the effect of noise. Keywords: Education, Forward Modelling, Inverse Modelling, Magnetotelluric
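The 1-D forward problem such a program solves can be sketched with the standard impedance recursion for a horizontally layered half-space. This is a generic textbook implementation (e^{+iωt} convention), not the MTINV or GUI code itself.

```python
import numpy as np

def mt1d_forward(resistivities, thicknesses, freqs):
    """1-D magnetotelluric forward response via the impedance recursion.
    resistivities: Ohm-m per layer (last = basement half-space);
    thicknesses: metres (one fewer than resistivities); freqs: Hz."""
    mu0 = 4e-7*np.pi
    omega = 2*np.pi*np.asarray(freqs, dtype=float)
    # intrinsic impedance of the basement half-space
    Z = np.sqrt(1j*omega*mu0*resistivities[-1])
    # recurse upward through the layers toward the surface
    for rho, h in zip(resistivities[-2::-1], thicknesses[::-1]):
        k = np.sqrt(1j*omega*mu0/rho)        # propagation constant
        Z0 = np.sqrt(1j*omega*mu0*rho)       # intrinsic layer impedance
        t = np.tanh(k*h)
        Z = Z0*(Z + Z0*t)/(Z0 + Z*t)
    rho_app = np.abs(Z)**2/(omega*mu0)       # apparent resistivity
    phase = np.degrees(np.angle(Z))          # impedance phase
    return rho_app, phase

# two-layer model: 100 Ohm-m over a 10 Ohm-m basement at 1 km depth;
# low frequencies sense the basement, high frequencies the top layer
freqs = np.logspace(-3, 3, 20)
rho_a, ph = mt1d_forward([100.0, 10.0], [1000.0], freqs)
```

A uniform half-space is a useful check: the apparent resistivity equals the true resistivity at all frequencies and the phase is 45 degrees.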
Are earthquakes an example of deterministic chaos?
NASA Technical Reports Server (NTRS)
Huang, Jie; Turcotte, Donald L.
1990-01-01
A simple mass-spring model is used to systematically examine the dynamical behavior introduced by fault zone heterogeneities. The model consists of two sliding blocks coupled to each other and to a constant velocity driver by elastic springs. The state of this system can be characterized by the positions of the two blocks relative to the driver. A simple static/dynamic friction law is used. When the system is symmetric, cyclic behavior is observed. For an asymmetric system, where the frictional forces for the two blocks are not equal, the solutions exhibit deterministic chaos. Chaotic windows occur repeatedly between regions of limit cycles on bifurcation diagrams. The model behavior is similar to that of the one-dimensional logistic map. The results provide substantial evidence that earthquakes are an example of deterministic chaos.
Deterministic dynamics in the minority game
NASA Astrophysics Data System (ADS)
Jefferies, P.; Hart, M. L.; Johnson, N. F.
2002-01-01
The minority game (MG) behaves as a stochastically disturbed deterministic system due to the coin toss invoked to resolve tied strategies. Averaging over this stochasticity yields a description of the MG's deterministic dynamics via mapping equations for the strategy score and global information. The strategy-score map contains both restoring-force and bias terms, whose magnitudes depend on the game's quenched disorder. Approximate analytical expressions are obtained and the effect of ``market impact'' is discussed. The global-information map represents a trajectory on a de Bruijn graph. For small quenched disorder, a Eulerian trail represents a stable attractor. It is shown analytically how antipersistence arises. The response to perturbations and different initial conditions is also discussed.
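A minimal simulation makes the coin-toss-disturbed deterministic dynamics concrete: strategy scores evolve deterministically, and stochasticity enters only through random tie-breaking. The parameters below (N = 101 agents, memory m = 3, two strategies per agent) are conventional illustrative choices, not ones taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
N, m, S, T = 101, 3, 2, 2000          # agents, memory, strategies/agent, rounds
P = 2**m                              # number of distinct histories

# each strategy maps a history (0..P-1) to an action in {-1, +1}
strat = rng.choice([-1, 1], size=(N, S, P))
score = np.zeros((N, S))
hist = 0
attendance = []
for _ in range(T):
    # each agent plays its best-scoring strategy; exact ties are broken by a
    # tiny random jitter, emulating the coin toss in the abstract
    best = np.argmax(score + 1e-9*rng.random((N, S)), axis=1)
    a = strat[np.arange(N), best, hist]
    A = a.sum()                       # attendance; N odd, so A is never 0
    attendance.append(A)
    minority = -np.sign(A)            # the minority action wins
    score += strat[:, :, hist]*minority   # +1 if predicted minority, else -1
    hist = (hist*2 + (1 if minority > 0 else 0)) % P   # update global history

vol = np.var(attendance[T//4:])/N     # volatility per agent
print(vol)
```

For this small m (memory much shorter than ln N), the game sits in the crowded phase, where the volatility per agent is typically larger than the coin-toss value of 1.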
The deterministic and statistical Burgers equation
NASA Astrophysics Data System (ADS)
Fournier, J.-D.; Frisch, U.
Fourier-Lagrangian representations of the UV-region inviscid-limit solutions of the equations of Burgers (1939) are developed for deterministic and random initial conditions. The Fourier-mode amplitude behavior of the deterministic case is characterized by complex singularities with fast decrease, power-law preshocks with k indices of about -4/3, and shocks with k to the -1. In the random case, shocks are associated with a k to the -2 spectrum which overruns the smaller wavenumbers and appears immediately under Gaussian initial conditions. The use of the Hopf-Cole solution in the random case is illustrated in calculations of the law of energy decay by a modified Kida (1979) method. Graphs and diagrams of the results are provided.
Shape-Controlled Deterministic Assembly of Nanowires.
Zhao, Yunlong; Yao, Jun; Xu, Lin; Mankin, Max N; Zhu, Yinbo; Wu, Hengan; Mai, Liqiang; Zhang, Qingjie; Lieber, Charles M
2016-04-13
Large-scale, deterministic assembly of nanowires and nanotubes with rationally controlled geometries could expand the potential applications of one-dimensional nanomaterials in bottom-up integrated nanodevice arrays and circuits. Control of the positions of straight nanowires and nanotubes has been achieved using several assembly methods, although simultaneous control of position and geometry has not been realized. Here, we demonstrate a new concept combining simultaneous assembly and guided shaping to achieve large-scale, high-precision shape controlled deterministic assembly of nanowires. We lithographically pattern U-shaped trenches and then shear transfer nanowires to the patterned substrate wafers, where the trenches serve to define the positions and shapes of transferred nanowires. Studies using semicircular trenches defined by electron-beam lithography yielded U-shaped nanowires with radii of curvature defined by inner surface of the trenches. Wafer-scale deterministic assembly produced U-shaped nanowires for >430,000 sites with a yield of ∼90%. In addition, mechanistic studies and simulations demonstrate that shaping results in primarily elastic deformation of the nanowires and show clearly the diameter-dependent limits achievable for accessible forces. Last, this approach was used to assemble U-shaped three-dimensional nanowire field-effect transistor bioprobe arrays containing 200 individually addressable nanodevices. By combining the strengths of wafer-scale top-down fabrication with diverse and tunable properties of one-dimensional building blocks in novel structural configurations, shape-controlled deterministic nanowire assembly is expected to enable new applications in many areas including nanobioelectronics and nanophotonics. PMID:26999059
GRAFLAB 2.3 for UNIX - A MATLAB database, plotting, and analysis tool: User's guide
Dunn, W.N.
1998-03-01
This report is a user's manual for GRAFLAB, which is a new database, analysis, and plotting package that has been written entirely in the MATLAB programming language. GRAFLAB is currently used for data reduction, analysis, and archival. GRAFLAB was written to replace GRAFAID, which is a FORTRAN database, analysis, and plotting package that runs on VAX/VMS.
Deterministic Mean-Field Ensemble Kalman Filtering
Law, Kody J. H.; Tembine, Hamidou; Tempone, Raul
2016-05-03
The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. In this paper, a density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence κ between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d
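For orientation, the stochastic (perturbed-observation) EnKF analysis step that the deterministic mean-field approximation replaces can be sketched as follows; the paper's DMFEnKF (a PDE solver plus a quadrature rule) is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_analysis(X, y, H, R):
    """Stochastic (perturbed-observation) EnKF analysis step.
    X: (d, M) forecast ensemble; y: (p,) observation; H: (p, d); R: (p, p)."""
    M = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)
    Pf = A @ A.T/(M - 1)                        # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
    # perturb the observation independently for each member
    Yp = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=M).T
    return X + K @ (Yp - H @ X)

# scalar linear-Gaussian check: prior N(0,1), observation y=1 with unit noise;
# the exact Kalman posterior is N(0.5, 0.5)
M = 5000
X = rng.standard_normal((1, M))
Xa = enkf_analysis(X, np.array([1.0]), np.eye(1), np.eye(1))
print(Xa.mean(), Xa.var())   # both approximately 0.5, up to sampling error
```

In the linear Gaussian case the ensemble statistics converge to the exact Kalman posterior as M grows, which is the large-sample asymptotics the paper's convergence proof extends.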
Atmospheric Downscaling using Genetic Programming
NASA Astrophysics Data System (ADS)
Zerenner, Tanja; Venema, Victor; Simmer, Clemens
2013-04-01
Coupling models for the different components of the Soil-Vegetation-Atmosphere system requires up- and downscaling procedures. The subject of our work is the downscaling scheme used to derive high-resolution forcing data for land-surface and subsurface models from coarser atmospheric model output. The current downscaling scheme [Schomburg et al. 2010, 2012] combines a bi-quadratic spline interpolation, deterministic rules and autoregressive noise. For the development of the scheme, training and validation data sets were created by carrying out high-resolution runs of the atmospheric model. The deterministic rules in this scheme are partly based on known physical relations and partly determined by an automated search for linear relationships between the high-resolution fields of the atmospheric model output and high-resolution data on surface characteristics. Up to now, deterministic rules are available for downscaling surface pressure and, partially, depending on the prevailing weather conditions, for near-surface temperature and radiation. The aim of our work is to improve those rules and to find deterministic rules for the remaining variables which require downscaling, e.g. precipitation or near-surface specific humidity. To accomplish that, we broaden the search by allowing for interdependencies between different atmospheric parameters, non-linear relations, and non-local and time-lagged relations. To cope with the vast number of possible solutions, we use genetic programming, a method from machine learning based on the principles of natural evolution. We are currently working with GPLAB, a genetic programming toolbox for Matlab. At first we tested the GP system by retrieving the known physical rule for downscaling surface pressure, i.e. the hydrostatic equation, from our training data. We found this to be a simple task for the GP system. Furthermore, we have improved the accuracy and efficiency of the GP solution by implementing constant variation and
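The hydrostatic rule mentioned for surface pressure amounts to the barometric relation p_fine = p_coarse · exp(−g·Δz/(R_d·T)). A short sketch of how such a deterministic downscaling rule is applied to fine-grid orography (the constants are standard, the grid values invented):

```python
import numpy as np

def downscale_pressure(p_coarse, z_coarse, z_fine, T_fine, Rd=287.06, g=9.81):
    """Hydrostatic (barometric) downscaling of surface pressure: extrapolate
    the coarse-grid pressure to the heights of the fine-grid orography."""
    return p_coarse*np.exp(-g*(z_fine - z_coarse)/(Rd*T_fine))

# one coarse cell at 500 m mean height, fine pixels spanning 400-700 m, T=285 K
z_fine = np.array([400.0, 500.0, 600.0, 700.0])
p = downscale_pressure(95000.0, 500.0, z_fine, 285.0)
print(p)   # pressure decreases monotonically with pixel height
```

This is exactly the kind of closed-form relation the GP search is expected to rediscover from the training fields.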
Arc_Mat: a Matlab-based spatial data analysis toolbox
NASA Astrophysics Data System (ADS)
Liu, Xingjian; Lesage, James
2010-03-01
This article presents an overview of Arc_Mat, a Matlab-based spatial data analysis software package whose source code has been placed in the public domain. An earlier version of the Arc_Mat toolbox was developed to extract map polygon and database information from ESRI shapefiles and provide high quality mapping in the Matlab software environment. We discuss revisions to the toolbox that: utilize enhanced computing and graphing capabilities of more recent versions of Matlab, restructure the toolbox with object-oriented programming features, and provide more comprehensive functions for spatial data analysis. The Arc_Mat toolbox functionality includes basic choropleth mapping; exploratory spatial data analysis that provides exploratory views of spatial data through various graphs, for example, histogram, Moran scatterplot, three-dimensional scatterplot, density distribution plot, and parallel coordinate plots; and more formal spatial data modeling that draws on the extensive Spatial Econometrics Toolbox functions. We briefly review the design aspects of the revised Arc_Mat and provide some illustrative examples that highlight representative uses of the toolbox. Finally, we discuss programming with and customizing the Arc_Mat toolbox functionalities.
OPTICON: Pro-Matlab software for large order controlled structure design
NASA Technical Reports Server (NTRS)
Peterson, Lee D.
1989-01-01
A software package for large order controlled structure design is described and demonstrated. The primary program, called OPTICON, uses both Pro-Matlab M-file routines and selected compiled FORTRAN routines linked into the Pro-Matlab structure. The program accepts structural model information in the form of state-space matrices and performs three basic design functions on the model: (1) open loop analyses; (2) closed loop reduced order controller synthesis; and (3) closed loop stability and performance assessment. The current controller synthesis methods implemented in this software are based on the Generalized Linear Quadratic Gaussian theory of Bernstein. In particular, a reduced order Optimal Projection synthesis algorithm based on a homotopy solution method was successfully applied to an experimental truss structure using a 58-state dynamic model. These results are presented and discussed. Current plans to expand the practical size of the design model to several hundred states and the intention to interface Pro-Matlab to a supercomputing environment are discussed.
A Parallel Controls Software Approach for PEP II: AIDA & Matlab Middle Layer
Wittmer, W.; Colocho, W.; White, G. (SLAC)
2007-11-06
The controls software in use at PEP II (Stanford Control Program - SCP) was originally developed in the eighties. It is very successful in routine operation, but due to its internal structure it is difficult and time consuming to extend its functionality. This is problematic during machine development and when solving operational issues. Routinely, data has to be exported from the system, analyzed offline, and calculated settings have to be reimported. Since this is a manual process, it is time consuming and error-prone. Setting up automated processes, as is done for MIA (Model Independent Analysis), is also time consuming and specific to each application. Recently, there has been a trend at light sources to use MATLAB as the platform to control accelerators using a 'MATLAB Middle Layer' (MML), and so-called channel access (CA) programs to communicate with the low level control system (LLCS). This has proven very successful, especially during machine development time and troubleshooting. A special CA code, named AIDA (Accelerator Independent Data Access), was developed to handle the communication between MATLAB, modern software frameworks, and the SCP. The MML had to be adapted for implementation at PEP II. Colliders differ significantly in their designs compared to light sources, which poses a challenge. PEP II is the first collider at which this implementation is being done. We report on this effort, which is still ongoing.
OCTBEC—A Matlab toolbox for optimal quantum control of Bose-Einstein condensates
NASA Astrophysics Data System (ADS)
Hohenester, Ulrich
2014-01-01
OCTBEC is a Matlab toolbox designed for optimal quantum control, within the framework of optimal control theory (OCT), of Bose-Einstein condensates (BEC). The systems we have in mind are ultracold atoms in confined geometries, where the dynamics takes place in one or two spatial dimensions, and the confinement potential can be controlled by some external parameters. Typical experimental realizations are atom chips, where the currents running through the wires produce magnetic fields that make it possible to trap and manipulate nearby atoms. The toolbox provides a variety of Matlab classes for simulations based on the Gross-Pitaevskii equation, the multi-configurational Hartree method for bosons, and on generic few-mode models, as well as for optimization problems. These classes can be easily combined, which has the advantage that the simulation programs can be flexibly adapted for various applications.
Teaching real-time ultrasonic imaging with a 4-channel sonar array, TI C6711 DSK and MATLAB.
York, George W P; Welch, Thad B; Wright, Cameron H G
2005-01-01
Ultrasonic medical imaging courses often stop at the theory or MATLAB simulation level, since professors find it challenging to give the students the experience of designing a real-time ultrasonic system. Some of the practical problems of working with real-time data from the ultrasonic transducers can be avoided by working at lower frequencies (in the sonar to low-ultrasound range). To facilitate this, we have created a platform combining the ease of MATLAB programming with the real-time processing capability of the low-cost Texas Instruments C6711 DSP starter kit and a 4-channel sonar array. With this platform students can design a B-mode or color-mode sonar system in the MATLAB environment. This paper demonstrates how the platform can be used in the classroom to illustrate the real-time signal processing stages, including beamforming, multi-rate sampling, demodulation, filtering, image processing, echo imaging, and Doppler frequency estimation. PMID:15850134
SAR digital spotlight implementation in MATLAB
NASA Astrophysics Data System (ADS)
Dungan, Kerry E.; Gorham, LeRoy A.; Moore, Linda J.
2013-05-01
Legacy synthetic aperture radar (SAR) exploitation algorithms were image-based algorithms, designed to exploit complex and/or detected SAR imagery. In order to improve the efficiency of the algorithms, image chips, or region of interest (ROI) chips, containing candidate targets were extracted. These image chips were then used directly by exploitation algorithms for the purposes of target discrimination or identification. Recent exploitation research has suggested that performance can be improved by processing the underlying phase history data instead of standard SAR imagery. Digital Spotlighting takes the phase history data of a large image and extracts the phase history data corresponding to a smaller spatial subset of the image. In a typical scenario, this spotlighted phase history data will contain far fewer samples than the original data but will still result in an alias-free image of the ROI. The Digital Spotlight algorithm can be considered the first stage in a "two-stage backprojection" image formation process. As the first stage in two-stage backprojection, Digital Spotlighting filters the original phase history data into a number of "pseudo"-phase histories that segment the scene into patches, each of which contains a reduced number of samples compared to the original data. The second stage of the imaging process consists of standard backprojection. The data rate reduction offered by Digital Spotlighting improves the computational efficiency of the overall imaging process by significantly reducing the total number of backprojection operations. This paper describes the Digital Spotlight algorithm in detail and provides an implementation in MATLAB.
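The core idea — restricting the image to an ROI band-limits the phase history, so the data can be decimated without aliasing — can be shown in a 1-D toy example. This Python/NumPy sketch is not the paper's MATLAB implementation: for clarity it applies the ROI window in the image domain (a real digital spotlight filters the phase history directly), and all sizes and target positions are illustrative.

```python
import numpy as np

N, M = 256, 32            # full scene size, ROI size (M divides N)
D = N // M                # decimation factor

scene = np.zeros(N, dtype=complex)
scene[[20, 100, 200]] = [1.0, 3.0, 1.5]   # point targets
phist = np.fft.ifft(scene)                # "phase history" of the full scene

c0 = 96                                    # ROI covers samples 96..127
window = np.zeros(N)
window[c0:c0 + M] = 1.0
roi_scene = np.roll(scene * window, -c0)   # windowed scene, ROI shifted to index 0
roi_phist = np.fft.ifft(roi_scene)[::D]    # spotlighted data: M samples, not N

roi_image = D * np.fft.fft(roi_phist)      # alias-free image of the ROI
```

Decimating by D in the data domain folds the image domain with period M; since the windowed scene's support fits inside one period, the folded image reproduces the ROI exactly while using only N/D samples.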
Paleomagnetic dating: Methods, MATLAB software, example
NASA Astrophysics Data System (ADS)
Hnatyshin, Danny; Kravchinsky, Vadim A.
2014-09-01
A MATLAB software tool has been developed to provide an easy-to-use graphical interface for the plotting and interpretation of paleomagnetic data. The tool takes either paleomagnetic directions or paleopoles and compares them to a user-defined apparent polar wander path or secular variation curve to determine the age of a paleomagnetic sample. Ages can be determined in two ways, either by translating the data onto the reference curve, or by rotating it about a set location (e.g. the sampling location). The results are then compiled in data tables which can be exported as an Excel file. The data can also be plotted using a variety of built-in stereographic projections, which can then be exported as an image file. This software was used to date the giant Sukhoi Log gold deposit in Russia. Sukhoi Log has undergone a complicated history of faulting, folding, and metamorphism, and is in the vicinity of many granitic bodies. Paleomagnetic analysis of Sukhoi Log allowed the timing of large-scale thermal or chemical events to be determined. Paleomagnetic analysis of gold-mineralized black shales was used to define the natural remanent magnetization recorded at Sukhoi Log. The paleomagnetic direction obtained from thermal demagnetization produced a paleopole at 61.3°N, 155.9°E, with the semi-major and semi-minor axes of the 95% confidence ellipse being 16.6° and 15.9°, respectively. This paleopole is compared to the Siberian apparent polar wander path (APWP) by translating the paleopole to the nearest location on the APWP. This produced an age of 255.2 +32.0/-31.0 Ma, the youngest well-defined age known for Sukhoi Log. We propose that this was the last major stage of activity at Sukhoi Log, and that it likely had a role in determining the present-day state of mineralization seen at the deposit.
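The "translate the pole onto the reference curve" step amounts to finding the APWP point with the smallest angular distance to the measured paleopole and reading off its age. A minimal Python sketch, using the Sukhoi Log pole quoted in the abstract but a purely synthetic three-point APWP (the real Siberian APWP is not reproduced here):

```python
import numpy as np

def angular_distance(lat1, lon1, lat2, lon2):
    """Great-circle (angular) distance in degrees between two poles, via haversine."""
    p1, p2, dl = np.radians(lat1), np.radians(lat2), np.radians(lon2 - lon1)
    a = np.sin((p2 - p1) / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2) ** 2
    return np.degrees(2 * np.arcsin(np.sqrt(a)))

# Synthetic reference curve: (age in Ma, pole latitude, pole longitude).
apwp = [(200.0, 50.0, 140.0),
        (250.0, 60.0, 150.0),
        (300.0, 70.0, 170.0)]

pole_lat, pole_lon = 61.3, 155.9   # measured paleopole (from the abstract)
dists = [angular_distance(pole_lat, pole_lon, la, lo) for _, la, lo in apwp]
best_age = apwp[int(np.argmin(dists))][0]   # age of the nearest APWP point
```

A full implementation would also propagate the confidence ellipse along the curve to obtain the asymmetric age uncertainty reported above.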
A Series of Computational Neuroscience Labs Increases Comfort with MATLAB
Nichols, David F.
2015-01-01
Computational simulations allow for a low-cost, reliable means to demonstrate complex and oftentimes inaccessible concepts to undergraduates. However, students without prior computer programming training may find working with code-based simulations to be intimidating and distracting. A series of computational neuroscience labs involving the Hodgkin-Huxley equations, an integrate-and-fire model, and a Hopfield memory network were used in an undergraduate neuroscience laboratory component of an introductory-level course. Using short focused surveys before and after each lab, student comfort levels were shown to increase drastically, from a majority of students being uncomfortable or neutral about working in the MATLAB environment to a vast majority being comfortable working in the environment. Though change was reported within each lab, a series of labs was necessary in order to establish a lasting high level of comfort. Comfort working with code is important as a first step in acquiring the computational skills required to address many questions within neuroscience. PMID:26557798
Deterministic seismic design and evaluation criteria to meet probabilistic performance goals
Short, S.A.; Murray, R.C.; Nelson, T.A.; Hill, J.R. (Office of Safety Appraisals)
1990-12-01
For DOE facilities across the United States, seismic design and evaluation criteria are based on probabilistic performance goals. In addition, other programs such as Advanced Light Water Reactors, New Production Reactors, and IPEEE for commercial nuclear power plants utilize design and evaluation criteria based on probabilistic performance goals. The use of probabilistic performance goals is a departure from design practice for commercial nuclear power plants, which have traditionally been designed utilizing a deterministic specification of earthquake loading combined with deterministic response evaluation methods and permissible behavior limits. Approaches which utilize probabilistic seismic hazard curves for specification of earthquake loading together with deterministic response evaluation methods and permissible behavior limits are discussed in this paper. Through the use of such design/evaluation approaches, it may be demonstrated that there is a high likelihood that probabilistic performance goals can be achieved. 12 refs., 2 figs., 9 tabs.
Subband/Transform MATLAB Functions For Processing Images
NASA Technical Reports Server (NTRS)
Glover, D.
1995-01-01
SUBTRANS software is a package of routines implementing image-data-processing functions for use with MATLAB(TM) software. Provides capability to transform image data with block transforms and to produce spatial-frequency subbands of transformed data. Functions cascaded to provide further decomposition into more subbands. Also used in image-data-compression systems. For example, transforms used to prepare data for lossy compression. Written for use in MATLAB mathematical-analysis environment.
GSGPEs: A MATLAB code for computing the ground state of systems of Gross-Pitaevskii equations
NASA Astrophysics Data System (ADS)
Caliari, Marco; Rainer, Stefan
2013-03-01
GSGPEs is a Matlab/GNU Octave suite of programs for the computation of the ground state of systems of Gross-Pitaevskii equations. It can compute the ground state in the defocusing case, for any number of equations with harmonic or quasi-harmonic trapping potentials, in spatial dimension one, two or three. The computation is based on a spectral decomposition of the solution into Hermite functions and direct minimization of the energy functional through a Newton-like method with an approximate line-search strategy. Catalogue identifier: AENT_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENT_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 1417 No. of bytes in distributed program, including test data, etc.: 13673 Distribution format: tar.gz Programming language: Matlab/GNU Octave. Computer: Any supporting Matlab/GNU Octave. Operating system: Any supporting Matlab/GNU Octave. RAM: About 100 MB for a single three-dimensional equation (test run output). Classification: 2.7, 4.9. Nature of problem: A system of Gross-Pitaevskii Equations (GPEs) is used to mathematically model a Bose-Einstein Condensate (BEC) for a mixture of different interacting atomic species. The equations can be used both to compute the ground state solution (i.e., the stationary order parameter that minimizes the energy functional) and to simulate the dynamics. For particular shapes of the traps, three-dimensional BECs can also be simulated by lower-dimensional GPEs. Solution method: The ground state of a system of Gross-Pitaevskii equations is computed through a spectral decomposition into Hermite functions and the direct minimization of the energy functional. Running time: About 30 seconds for a single three-dimensional equation with 40 degrees of freedom in each spatial direction (test run output).
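The energy-minimization idea behind the ground-state computation can be illustrated with a much simpler scheme than the toolbox's Hermite/Newton method: the normalized gradient flow (imaginary-time evolution) of a single 1-D GPE on a finite-difference grid. This Python sketch uses toy parameters and is not the GSGPEs algorithm, only a demonstration of minimizing the same energy functional.

```python
import numpy as np

N, L = 256, 16.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
V = 0.5 * x ** 2          # harmonic trap
g = 100.0                 # defocusing interaction strength

def energy(psi):
    """GPE energy functional: kinetic + trap + interaction terms."""
    dpsi = (np.roll(psi, -1) - np.roll(psi, 1)) / (2 * dx)
    return np.sum(0.5 * np.abs(dpsi) ** 2 + V * np.abs(psi) ** 2
                  + 0.5 * g * np.abs(psi) ** 4) * dx

psi = np.exp(-x ** 2 / 2)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)      # unit norm
e0 = energy(psi)

dtau = 1e-3
for _ in range(2000):
    lap = (np.roll(psi, -1) - 2 * psi + np.roll(psi, 1)) / dx ** 2
    psi = psi - dtau * (-0.5 * lap + V * psi + g * np.abs(psi) ** 2 * psi)
    psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)  # project back to unit norm

e_ground = energy(psi)
```

Each step moves downhill along the energy gradient and re-normalizes, so the energy of the iterate decreases toward the ground-state value; the Newton-like method in GSGPEs reaches the same minimizer far faster.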
Prediction of 2-level PWM inverter efficiency using MATLAB/Simulink
NASA Astrophysics Data System (ADS)
Kim, Yoon-Ho; Kim, Seong-Je
2015-10-01
This article proposes a direct approach for the prediction of inverter efficiency using MATLAB/Simulink, instead of an indirect loss-calculation approach based on analytical models. In the analytical approach, efficiency is obtained by calculating individual losses separately, such as switching losses, conduction losses and harmonic losses, using analytical models. However, this approach requires accurate analytical models and complicated calculations, due to the variation in the switching frequency, switching transients and modulation techniques. In the proposed approach, the actual waveform of the inverter system is directly generated using MATLAB/Simulink. The instantaneous voltage and current waveforms, including switching transients, are generated. Thus, the proposed approach is very simple and convenient for efficiency prediction. The proposed approach also works for any system parameters or control methods, such as various pulse-width modulation (PWM) techniques, different switching frequencies, switching devices and load types. The proposed approach can be adopted for the efficiency prediction of any switching strategy and any type of inverter, such as neutral-point-clamped (NPC) inverters, H-bridge inverters and the H5 topology, since the topologies are modelled as circuits in the MATLAB/Simulink program and no analytical model is required. Furthermore, the proposed approach can identify operating techniques and conditions, such as PWM techniques and switching frequency, that offer high efficiency. In this article, inverter performance is evaluated for various PWM techniques and switching frequencies, and the PWM technique and switching frequency that offer high efficiency are identified. Finally, the proposed approach is verified by experimental results.
Minimal Deterministic Physicality Applied to Cosmology
NASA Astrophysics Data System (ADS)
Valentine, John S.
This report summarizes ongoing research and development since our 2012 foundation paper, including the emergent effects of a deterministic mechanism for fermion interactions: (1) the coherence of black holes and particles using a quantum chaotic model; (2) wide-scale (anti)matter prevalence from exclusion and weak interaction during the fermion reconstitution process; and (3) red-shift due to variations of vacuum energy density. We provide a context for Standard Model fields, and show how gravitation can be accountably unified in the same mechanism, but not as a unified field.
Deterministic Switching in Bismuth Ferrite Nanoislands.
Morelli, Alessio; Johann, Florian; Burns, Stuart R; Douglas, Alan; Gregg, J Marty
2016-08-10
We report deterministic selection of the polarization variant in bismuth ferrite (BiFeO3) nanoislands via a two-step scanning probe microscopy procedure. The polarization orientation in a nanoisland is toggled to the desired variant after a reset operation by scanning a conductive atomic force probe in contact over the surface while a bias is applied. The final polarization variant is determined by the direction of the inhomogeneous in-plane trailing field associated with the moving probe tip. This work provides the framework for better control of switching in rhombohedral ferroelectrics and for a deeper understanding of exchange coupling in multiferroic nanoscale heterostructures toward the realization of magnetoelectric devices. PMID:27454612
Deterministic convergence in iterative phase shifting
Luna, Esteban; Salas, Luis; Sohn, Erika; Ruiz, Elfego; Nunez, Juan M.; Herrera, Joel
2009-03-10
Previous implementations of the iterative phase shifting method, in which the phase of a test object is computed from measurements using a phase shifting interferometer with unknown positions of the reference, do not provide an accurate way of knowing when convergence has been attained. We present a new approach to this method that allows us to deterministically identify convergence. The method is tested with a home-built Fizeau interferometer that measures optical surfaces polished to λ/100 using the Hydra tool. The intrinsic quality of the measurements is better than 0.5 nm. Other possible applications for this technique include fringe projection or any problem where phase shifting is involved.
Deterministic quantum computation with one photonic qubit
NASA Astrophysics Data System (ADS)
Hor-Meyll, M.; Tasca, D. S.; Walborn, S. P.; Ribeiro, P. H. Souto; Santos, M. M.; Duzzioni, E. I.
2015-07-01
We show that deterministic quantum computing with one qubit (DQC1) can be experimentally implemented with a spatial light modulator, using the polarization and the transverse spatial degrees of freedom of light. The scheme allows the computation of the trace of a high-dimensional matrix, limited by the resolution of the modulator panel and by technical imperfections. In order to illustrate the method, we compute the normalized trace of unitary matrices and implement the Deutsch-Jozsa algorithm. The largest matrix that can be manipulated with our setup is 1080 × 1920, which is able to represent a system with approximately 21 qubits.
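The DQC1 trace computation can be verified numerically: a pure control qubit in |+⟩ is coupled to a maximally mixed register by a controlled-U, after which the control's ⟨σx⟩ and ⟨σy⟩ give the real and imaginary parts of tr(U)/2^n. This NumPy sketch (toy size n = 3, not the optical setup of the paper) checks the identity directly.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
dim = 2 ** n

# Random unitary U = V diag(e^{i*theta}) V^dagger built from a random Hermitian matrix.
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
_, V = np.linalg.eigh(A + A.conj().T)
U = V @ np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, dim))) @ V.conj().T

# Initial state: control qubit in |+><+|, register maximally mixed.
rho_c = 0.5 * np.ones((2, 2))
rho = np.kron(rho_c, np.eye(dim) / dim)

# Controlled-U: identity on the |0> branch, U on the |1> branch.
CU = np.block([[np.eye(dim), np.zeros((dim, dim))],
               [np.zeros((dim, dim)), U]])
rho = CU @ rho @ CU.conj().T

# Pauli expectations on the control encode the normalized trace.
sx = np.kron(np.array([[0, 1], [1, 0]]), np.eye(dim))
sy = np.kron(np.array([[0, -1j], [1j, 0]]), np.eye(dim))
trace_est = np.trace(sx @ rho).real + 1j * np.trace(sy @ rho).real
```

In an experiment the two expectations are estimated from measurement statistics; here the density-matrix arithmetic reproduces tr(U)/2^n exactly.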
YALINA analytical benchmark analyses using the deterministic ERANOS code system.
Gohar, Y.; Aliberti, G. (Nuclear Engineering Division)
2009-08-31
The growing stockpile of nuclear waste constitutes a severe challenge for mankind for more than a hundred thousand years. To reduce the radiotoxicity of the nuclear waste, the Accelerator Driven System (ADS) has been proposed. One of the most important issues of ADS technology is the choice of the appropriate neutron spectrum for the transmutation of Minor Actinides (MA) and Long-Lived Fission Products (LLFP). This report presents the analytical analyses obtained with the deterministic ERANOS code system for the YALINA facility within: (a) the collaboration between Argonne National Laboratory (ANL) of the USA and the Joint Institute for Power and Nuclear Research (JIPNR) Sosny of Belarus; and (b) the IAEA coordinated research projects for accelerator-driven systems (ADS). This activity is conducted as a part of the Russian Research Reactor Fuel Return (RRRFR) Program and the Global Threat Reduction Initiative (GTRI) of DOE/NNSA.
Karpievitch, Yuliya V; Almeida, Jonas S
2006-01-01
Background Matlab, a powerful and productive language that allows for rapid prototyping, modeling and simulation, is widely used in computational biology. Modeling and simulation of large biological systems often require more computational resources than are available on a single computer. Existing distributed computing environments like the Distributed Computing Toolbox, MatlabMPI, Matlab*G and others allow for the remote (and possibly parallel) execution of Matlab commands with varying support for features like an easy-to-use application programming interface, load-balanced utilization of resources, extensibility over the wide area network, and minimal system administration skill requirements. However, all of these environments require some level of access to participating machines to manually distribute the user-defined libraries that the remote call may invoke. Results mGrid augments the usual process distribution seen in other similar distributed systems by adding facilities for user code distribution. mGrid's client-side interface is an easy-to-use native Matlab toolbox that transparently executes user-defined code on remote machines (i.e. the user is unaware that the code is executing somewhere else). Run-time variables are automatically packed and distributed with the user-defined code and automated load-balancing of remote resources enables smooth concurrent execution. mGrid is an open source environment. Apart from the programming language itself, all other components are also open source, freely available tools: light-weight PHP scripts and the Apache web server. Conclusion Transparent, load-balanced distribution of user-defined Matlab toolboxes and rapid prototyping of many simple parallel applications can now be done with a single easy-to-use Matlab command. Because mGrid utilizes only Matlab, light-weight PHP scripts and the Apache web server, installation and configuration are very simple. Moreover, the web-based infrastructure of mGrid allows for it
Generating optimal control simulations of musculoskeletal movement using OpenSim and MATLAB
Lee, Leng-Feng; Umberger, Brian R
2016-01-01
Computer modeling, simulation and optimization are powerful tools that have seen increased use in biomechanics research. Dynamic optimizations can be categorized as either data-tracking or predictive problems. The data-tracking approach has been used extensively to address human movement problems of clinical relevance. The predictive approach also holds great promise, but has seen limited use in clinical applications. Enhanced software tools would facilitate the application of predictive musculoskeletal simulations to clinically-relevant research. The open-source software OpenSim provides tools for generating tracking simulations but not predictive simulations. However, OpenSim includes an extensive application programming interface that permits extending its capabilities with scripting languages such as MATLAB. In the work presented here, we combine the computational tools provided by MATLAB with the musculoskeletal modeling capabilities of OpenSim to create a framework for generating predictive simulations of musculoskeletal movement based on direct collocation optimal control techniques. In many cases, the direct collocation approach can be used to solve optimal control problems considerably faster than traditional shooting methods. Cyclical and discrete movement problems were solved using a simple 1-degree-of-freedom musculoskeletal model and a model of the human lower limb, respectively. The problems could be solved in reasonable amounts of time (several seconds to 1-2 hours) using the open-source IPOPT solver. The problems could also be solved using the fmincon solver that is included with MATLAB, but the computation times were excessively long for all but the smallest of problems. The performance advantage for IPOPT was derived primarily by exploiting sparsity in the constraints Jacobian. The framework presented here provides a powerful and flexible approach for generating optimal control simulations of musculoskeletal movement using OpenSim and MATLAB.
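The essence of direct transcription — turn the trajectory optimization into a finite-dimensional problem whose constraints encode the dynamics — can be shown on a toy system far simpler than a musculoskeletal model. This Python sketch (not the OpenSim/MATLAB pipeline of the paper) finds the minimum-effort rest-to-rest move of a double integrator; because the cost is quadratic and the transcribed constraints are linear, the least-norm solution is available in closed form, so no IPOPT or fmincon call is needed here.

```python
import numpy as np

N, dt = 50, 0.02                       # control intervals, step size

# Forward-Euler dynamics x_{k+1} = x_k + dt*v_k, v_{k+1} = v_k + dt*u_k give
# linear constraints on the control sequence u[0..N-1]:
#   v_N = dt   * sum_j u_j            (final velocity, must be 0)
#   x_N = dt^2 * sum_j (N-1-j) u_j    (final position, must be 1)
j = np.arange(N)
A = np.vstack([dt * np.ones(N), dt ** 2 * (N - 1 - j)])
b = np.array([0.0, 1.0])

# Least-norm control satisfying A u = b (minimizes sum of u^2).
u = A.T @ np.linalg.solve(A @ A.T, b)

# Forward-simulate to confirm the boundary conditions are met.
x = v = 0.0
for uk in u:
    x, v = x + dt * v, v + dt * uk
```

Nonlinear musculoskeletal dynamics make the transcribed problem a sparse NLP rather than a QP, which is where solvers like IPOPT and their exploitation of the sparse constraints Jacobian come in.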
Identification of the Jiles-Atherton model parameters using random and deterministic searches
NASA Astrophysics Data System (ADS)
Del Moral Hernandez, Emilio; Muranaka, Carlos S.; Cardoso, José R.
2000-01-01
The five parameters of the Jiles-Atherton (J-A) model are identified automatically from experimental B-H hysteresis curves of magnetic cores, using a simple program built on the Matlab platform. This computational tool is based on adaptive adjustment of the J-A model parameters, combining the model's coupled non-linear parametric differential equations with simulated annealing techniques.
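The adaptive-fit idea can be sketched with simulated annealing on a toy problem. This Python example fits a two-parameter saturation curve y = a·tanh(b·x) rather than integrating the full Jiles-Atherton ODEs (which is beyond a short sketch); the parameter values, step sizes and cooling schedule are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 60)
y_meas = 1.5 * np.tanh(2.0 * x)        # synthetic "measured" curve, a=1.5, b=2.0

def sse(params):
    """Sum of squared errors between model curve and measured curve."""
    a, b = params
    return float(np.sum((a * np.tanh(b * x) - y_meas) ** 2))

params = np.array([0.5, 0.5])          # deliberately poor initial guess
init_err = sse(params)
best, best_err = params.copy(), init_err
err, T = init_err, 1.0
for _ in range(2000):
    trial = params + rng.normal(scale=0.1, size=2)   # random perturbation
    trial_err = sse(trial)
    # Accept improvements always; accept worse moves with Boltzmann probability.
    if trial_err < err or rng.random() < np.exp(-(trial_err - err) / T):
        params, err = trial, trial_err
        if err < best_err:
            best, best_err = params.copy(), err
    T *= 0.995                          # geometric cooling schedule
```

For the real J-A identification, each candidate parameter set requires integrating the hysteresis ODEs to produce a model B-H loop before the error can be evaluated; the annealing loop itself is unchanged.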
NASA Astrophysics Data System (ADS)
Wang, Fengyu
Traditional deterministic reserve requirements rely on ad-hoc, rule-of-thumb methods to determine adequate reserve in order to ensure a reliable unit commitment. Since congestion and uncertainties exist in the system, both the quantity and the location of reserves are essential to ensure system reliability and market efficiency. The modeling of operating reserves in the existing deterministic reserve requirements acquires the operating reserves on a zonal basis and does not fully capture the impact of congestion. The purpose of a reserve zone is to ensure that operating reserves are spread across the network. Operating reserves are shared inside each reserve zone, but intra-zonal congestion may block the deliverability of operating reserves within a zone. Thus, improving reserve policies such as reserve zones may improve the location and deliverability of reserves. As more non-dispatchable renewable resources are integrated into the grid, it will become increasingly difficult to predict the transfer capabilities and the network congestion. At the same time, renewable resources require operators to acquire more operating reserves. With existing deterministic reserve requirements unable to ensure optimal reserve locations, the importance of reserve location and reserve deliverability will increase. While stochastic programming can be used to determine reserves by explicitly modeling uncertainties, there are still scalability as well as pricing issues. Therefore, new methods to improve existing deterministic reserve requirements are desired. One key barrier to improving existing deterministic reserve requirements is their potential market impacts. A metric, quality of service, is proposed in this thesis to evaluate the price signal and market impacts of proposed hourly reserve zones. Three main goals of this thesis are: 1) to develop a theoretical and mathematical model to better locate reserves while maintaining the deterministic unit commitment and economic dispatch
Discrete Deterministic and Stochastic Petri Nets
NASA Technical Reports Server (NTRS)
Zijal, Robert; Ciardo, Gianfranco
1996-01-01
Petri nets augmented with timing specifications have gained wide acceptance in the area of performance and reliability evaluation of complex systems exhibiting concurrency, synchronization, and conflicts. The state space of time-extended Petri nets is mapped onto its basic underlying stochastic process, which can be shown to be Markovian under the assumption of exponentially distributed firing times. The integration of exponentially and non-exponentially distributed timing is still one of the major problems for the analysis and was first attacked for continuous-time Petri nets at the cost of structural or analytical restrictions. We propose a discrete deterministic and stochastic Petri net (DDSPN) formalism with no imposed structural or analytical restrictions, where transitions can fire either in zero time or according to arbitrary firing times that can be represented as the time to absorption in a finite absorbing discrete-time Markov chain (DTMC). Exponentially distributed firing times are then approximated arbitrarily well by geometric distributions. Deterministic firing times are a special case of the geometric distribution. The underlying stochastic process of a DDSPN is then also a DTMC, from which the transient and stationary solution can be obtained by standard techniques. A comprehensive algorithm and some state space reduction techniques for the analysis of DDSPNs are presented, comprising the automatic detection of conflicts and confusions, which removes a major obstacle for the analysis of discrete time models.
Ballistic annihilation and deterministic surface growth
NASA Astrophysics Data System (ADS)
Belitsky, Vladimir; Ferrari, Pablo A.
1995-08-01
A model of deterministic surface growth studied by Krug and Spohn, a model of the annihilating reaction A+B→inert studied by Elskens and Frisch, a one-dimensional three-color cyclic cellular automaton studied by Fisch, and a particular automaton that has the number 184 in the classification of Wolfram can be studied via a cellular automaton with stochastic initial data called ballistic annihilation. This automaton is defined by the following rules: At time t=0, one particle is put at each integer point of ℝ. To each particle, a velocity is assigned in such a way that it may be either +1 or -1 with probabilities 1/2, independent of the velocities of the other particles. As time goes on, each particle moves along ℝ at the velocity assigned to it and annihilates when it collides with another particle. In the present paper we compute the distribution of this automaton for each time t ∈ ℕ. We then use this result to obtain the hydrodynamic limit for the surface profile from the model of deterministic surface growth mentioned above. We also show the relation of this limit process to the process which we call the moving local minimum of Brownian motion. The latter is the process B^min_x, x ∈ ℝ, defined by B^min_x := min{B_y : x-1 ≤ y ≤ x+1} for every x ∈ ℝ, where B_x, x ∈ ℝ, is the standard Brownian motion with B_0 = 0.
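The automaton rules above translate directly into a short simulation. The following is a minimal discrete-time sketch (our own implementation with free boundaries, written in Python rather than being the authors' construction; the collision rule removes any (+1, −1) pair that would meet within the next unit of time):

```python
import random

def ballistic_annihilation(vels, t_max):
    """Simulate the ballistic-annihilation automaton for t_max unit time steps.
    vels[i] in {+1, -1} is the velocity of the particle starting at site i."""
    parts = [(x, v) for x, v in enumerate(vels)]
    for _ in range(t_max):
        parts.sort()
        survivors, i = [], 0
        while i < len(parts):
            # a (+1, -1) pair at distance 1 or 2 meets within the coming step
            if (i + 1 < len(parts) and parts[i][1] == 1
                    and parts[i + 1][1] == -1
                    and parts[i + 1][0] - parts[i][0] <= 2):
                i += 2                      # both particles annihilate
            else:
                survivors.append(parts[i])
                i += 1
        parts = [(x + v, v) for x, v in survivors]
    return parts

# i.i.d. +/-1 velocities, as in the paper's initial condition
vels = [random.choice((-1, 1)) for _ in range(1000)]
print(len(ballistic_annihilation(vels, 100)))   # survivors at t = 100
```

Averaging the survivor count over many velocity draws gives a Monte Carlo estimate of the particle density whose exact distribution the paper computes.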
Moment equations for a piecewise deterministic PDE
NASA Astrophysics Data System (ADS)
Bressloff, Paul C.; Lawley, Sean D.
2015-03-01
We analyze a piecewise deterministic PDE consisting of the diffusion equation on a finite interval Ω with randomly switching boundary conditions and diffusion coefficient. We proceed by spatially discretizing the diffusion equation using finite differences and constructing the Chapman-Kolmogorov (CK) equation for the resulting finite-dimensional stochastic hybrid system. We show how the CK equation can be used to generate a hierarchy of equations for the r-th moments of the stochastic field, which take the form of r-dimensional parabolic PDEs on Ω^r that couple to lower order moments at the boundaries. We explicitly solve the first and second order moment equations (r = 2). We then describe how the r-th moment of the stochastic PDE can be interpreted in terms of the splitting probability that r non-interacting Brownian particles all exit at the same boundary; although the particles are non-interacting, statistical correlations arise due to the fact that they all move in the same randomly switching environment. Hence the stochastic diffusion equation describes two levels of randomness: Brownian motion at the individual particle level and a randomly switching environment. Finally, in the limit of fast switching, we use a quasi-steady state approximation to reduce the piecewise deterministic PDE to an SPDE with multiplicative Gaussian noise in the bulk and a stochastically-driven boundary.
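The setup can be illustrated with a small Monte Carlo experiment: spatially discretize the diffusion equation and let one boundary condition switch according to a two-state Markov chain, then average over realizations to estimate the first moment. This is a sketch under our own illustrative parameters (grid size, switching rate, time horizon), not the authors' CK-equation machinery:

```python
import numpy as np

def switching_diffusion(n=21, D=1.0, dt=1e-3, steps=200,
                        rate=5.0, trials=50, seed=0):
    """Monte Carlo estimate of the first moment of a diffusion field on [0, 1]
    whose right boundary switches between absorbing (u = 0) and reflecting
    according to a two-state Markov chain. All parameters are illustrative."""
    rng = np.random.default_rng(seed)
    dx = 1.0 / (n - 1)                      # dt*D/dx**2 = 0.4 < 0.5: stable
    acc = np.zeros(n)
    for _ in range(trials):
        u = np.ones(n)                      # initial condition u(x, 0) = 1
        absorbing = True                    # state of the right boundary
        for _ in range(steps):
            if rng.random() < rate * dt:    # Markov switch of the boundary
                absorbing = not absorbing
            # explicit finite-difference step of the diffusion equation
            u[1:-1] += D * dt * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
            u[0] = 0.0                      # left end held absorbing
            u[-1] = 0.0 if absorbing else u[-2]
        acc += u
    return acc / trials

print(switching_diffusion())                # sample-mean field at t = 0.2
```

The correlations the paper analyzes appear here as the gap between this sample mean and the solution of the deterministic PDE with time-averaged boundary conditions.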
Deterministic prediction of surface wind speed variations
NASA Astrophysics Data System (ADS)
Drisya, G. V.; Kiplangat, D. C.; Asokan, K.; Satheesh Kumar, K.
2014-11-01
Accurate prediction of wind speed is an important aspect of various tasks related to wind energy management, such as wind turbine predictive control and wind power scheduling. The most typical characteristic of wind speed data is its persistent temporal variation. Most of the techniques reported in the literature for prediction of wind speed and power are based on statistical methods or probabilistic distributions of wind speed data. In this paper we demonstrate that deterministic forecasting methods can make accurate short-term predictions of wind speed using past data, at locations where the wind dynamics exhibit chaotic behaviour. The predictions are remarkably accurate up to 1 h with a normalised RMSE (root mean square error) of less than 0.02 and reasonably accurate up to 3 h with an error of less than 0.06. Repeated application of these methods at 234 different geographical locations for predicting wind speeds at 30-day intervals for 3 years reveals that the accuracy of prediction is more or less the same across all locations and time periods. Comparison of the results with f-ARIMA model predictions shows that the deterministic models with suitable parameters are capable of returning improved prediction accuracy and capturing the dynamical variations of the actual time series more faithfully. These methods are simple and computationally efficient and require only records of past data for making short-term wind speed forecasts within a practically tolerable margin of error.
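The abstract does not spell out the deterministic forecasting method, but forecasters of this family typically embed the time series in phase space and predict from nearest-neighbour analogues. A generic sketch (Python; the embedding dimension, neighbour count, and the logistic-map demo signal are our own illustrative choices, not the paper's wind data or algorithm):

```python
import numpy as np

def knn_forecast(series, m=3, k=5):
    """One-step phase-space forecast: embed in dimension m, find the k nearest
    past states to the current state, and average their observed successors."""
    series = np.asarray(series, dtype=float)
    X = np.lib.stride_tricks.sliding_window_view(series[:-1], m)  # past states
    y = series[m:]                     # successor of each embedded state
    query = series[-m:]                # the current state
    d = np.linalg.norm(X - query, axis=1)
    return y[np.argsort(d)[:k]].mean()

# demo on a chaotic logistic-map series (illustrative stand-in for wind data)
x = [0.123]
for _ in range(2000):
    x.append(4.0 * x[-1] * (1.0 - x[-1]))
pred = knn_forecast(x[:-1])
print(abs(pred - x[-1]))               # small one-step prediction error
```

As in the paper, accuracy degrades with horizon: iterating the forecast compounds the error at a rate set by the largest Lyapunov exponent.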
Deterministic Creation of Macroscopic Cat States
Lombardo, Daniel; Twamley, Jason
2015-01-01
Despite current technological advances, observing quantum mechanical effects outside of the nanoscopic realm is extremely challenging. For this reason, the observation of such effects on larger scale systems is currently one of the most attractive goals in quantum science. Many experimental protocols have been proposed for both the creation and observation of quantum states on macroscopic scales, in particular, in the field of optomechanics. The majority of these proposals, however, rely on performing measurements, making them probabilistic. In this work we develop a completely deterministic method of macroscopic quantum state creation. We study the prototypical optomechanical Membrane In The Middle model and show that by controlling the membrane’s opacity, and through careful choice of the optical cavity initial state, we can deterministically create and grow the spatial extent of the membrane’s position into a large cat state. It is found that by using a Bose-Einstein condensate as a membrane high fidelity cat states with spatial separations of up to ∼300 nm can be achieved. PMID:26345157
Deterministic forward scatter from surface gravity waves.
Deane, Grant B; Preisig, James C; Tindle, Chris T; Lavery, Andone; Stokes, M Dale
2012-12-01
Deterministic structures in sound reflected by gravity waves, such as focused arrivals and Doppler shifts, have implications for underwater acoustics and sonar, and the performance of underwater acoustic communications systems. A stationary phase analysis of the Helmholtz-Kirchhoff scattering integral yields the trajectory of focused arrivals and their relationship to the curvature of the surface wave field. Deterministic effects along paths up to 70 water depths long are observed in shallow water measurements of surface-scattered sound at the Martha's Vineyard Coastal Observatory. The arrival time and amplitude of surface-scattered pulses are reconciled with model calculations using measurements of surface waves made with an upward-looking sonar mounted mid-way along the propagation path. The root mean square difference between the modeled and observed pulse arrival amplitude and delay, respectively, normalized by the maximum range of amplitudes and delays, is found to be 0.2 or less for the observation periods analyzed. Cross-correlation coefficients for modeled and observed pulse arrival delays varied from 0.83 to 0.16 depending on surface conditions. Cross-correlation coefficients for normalized pulse energy for the same conditions were small and varied from 0.16 to 0.06. In contrast, the modeled and observed pulse arrival delay and amplitude statistics were in good agreement. PMID:23231099
The Waveform Suite: A robust platform for accessing and manipulating seismic waveforms in MATLAB
NASA Astrophysics Data System (ADS)
Reyes, C. G.; West, M. E.; McNutt, S. R.
2009-12-01
The Waveform Suite, developed at the University of Alaska Geophysical Institute, is an open-source collection of MATLAB classes that provide a means to import, manipulate, display, and share waveform data while ensuring integrity of the data and stability for programs that incorporate them. Data may be imported from a variety of sources, such as Antelope, Winston databases, SAC files, SEISAN, .mat files, or other user-defined file formats. The waveforms being manipulated in MATLAB are isolated from their stored representations, relieving the overlying programs from the responsibility of understanding the specific format in which data is stored or retrieved. The waveform class provides an object oriented framework that simplifies manipulations to waveform data. Playing with data becomes easier because the tedious aspects of data manipulation have been automated. The user is able to change multiple waveforms simultaneously using standard mathematical operators and other syntactically familiar functions. Unlike MATLAB structs or workspace variables, the data stored within waveform class objects are protected from modification, and instead are accessed through standardized functions, such as get and set; these are already familiar to users of MATLAB’s graphical features. This prevents accidental or nonsensical modifications to the data, which in turn simplifies troubleshooting of complex programs. Upgrades to the internal structure of the waveform class are invisible to applications which use it, making maintenance easier. We demonstrate the Waveform Suite’s capabilities on seismic data from Okmok and Redoubt volcanoes. Years of data from Okmok were retrieved from Antelope and Winston databases. Using the Waveform Suite, we built a tremor-location program. Because the program was built on the Waveform Suite, modifying it to operate on real-time data from Redoubt involved only minimal code changes. The utility of the Waveform Suite as a foundation for large
Chirp Z-transform spectral zoom optimization with MATLAB.
Martin, Grant D.
2005-11-01
The MATLAB language has become a standard for rapid prototyping throughout all disciplines of engineering because the environment is easy to understand and use. Many of the basic functions included in MATLAB are those operations that are necessary to carry out larger algorithms such as the chirp z-transform spectral zoom. These functions include, but are not limited to, mathematical operators, logical operators, array indexing, and the Fast Fourier Transform (FFT). However, despite its ease of use, MATLAB's technical computing language is interpreted and thus is not always capable of the memory management and performance of a compiled language. There are, however, several optimizations that can be made within the chirp z-transform spectral zoom algorithm itself, and also to the MATLAB implementation, in order to take full advantage of the computing environment, lower processing time, and improve memory usage. To that end, this document's purpose is two-fold. The first is to demonstrate how to perform a chirp z-transform spectral zoom as well as an optimization within the algorithm that improves performance and memory usage. The second is to demonstrate a minor MATLAB language usage technique that can reduce overhead memory costs and improve performance.
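The report's specific optimizations are not reproduced in this abstract. For reference, the chirp z-transform itself can be sketched via Bluestein's algorithm; this is a generic textbook formulation in Python/NumPy, not the report's optimized MATLAB code, and the function names are ours:

```python
import numpy as np

def czt(x, m, w, a):
    """Chirp z-transform via Bluestein's algorithm:
    X[k] = sum_n x[n] * a**(-n) * w**(n*k), for k = 0..m-1."""
    x = np.asarray(x, dtype=complex)
    n = len(x)
    k = np.arange(max(m, n))
    chirp = w ** (k ** 2 / 2.0)                     # half-integer chirp powers
    L = 1 << int(np.ceil(np.log2(n + m - 1)))       # FFT-friendly length
    u = np.zeros(L, dtype=complex)
    u[:n] = x * a ** (-np.arange(n)) * chirp[:n]
    v = np.zeros(L, dtype=complex)
    v[:m] = 1.0 / chirp[:m]                         # non-negative lags
    v[L - n + 1:] = 1.0 / chirp[1:n][::-1]          # wrapped negative lags
    y = np.fft.ifft(np.fft.fft(u) * np.fft.fft(v))  # fast convolution
    return y[:m] * chirp[:m]

def spectral_zoom(x, fs, f1, f2, m):
    """Evaluate the spectrum only on the band [f1, f2) with m bins."""
    a = np.exp(2j * np.pi * f1 / fs)
    w = np.exp(-2j * np.pi * (f2 - f1) / (m * fs))
    return czt(x, m, w, a)
```

With `a = 1` and `w = exp(-2*pi*i/N)` the transform reduces to the ordinary DFT; choosing `a` and `w` as in `spectral_zoom` concentrates all m output bins on a narrow band, which is the "zoom" the report optimizes.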
Using MATLAB software with Tomcat server and Java platform for remote image analysis in pathology
2011-01-01
Background: Matlab is one of the most advanced development tools for applications in engineering practice. From our point of view the most important part is the image processing toolbox, offering many built-in functions, including mathematical morphology, and implementations of many artificial neural networks. It is a very popular platform for the creation of specialized programs for image analysis, also in pathology. Based on the latest version of the Matlab Builder Java toolbox, it is possible to create software serving as a remote system for image analysis in pathology via internet communication. The internet platform can be realized based on Java Servlet Pages with a Tomcat server as the servlet container. Methods: In the presented software implementation we propose remote image analysis realized by Matlab algorithms. These algorithms can be compiled to an executable jar file with the help of the Matlab Builder Java toolbox. The Matlab function must be declared with the set of input data, an output structure with numerical results, and a Matlab web figure. Any function prepared in that manner can be used as a Java function in Java Servlet Pages (JSP). The graphical user interface providing the input data and displaying the results (also in graphical form) must be implemented in JSP. Additionally, data storage to a database can be implemented within the algorithm written in Matlab with the help of the Matlab Database Toolbox, directly with the image processing. The complete JSP page can be run by the Tomcat server. Results: The proposed tool for remote image analysis was tested on the Computerized Analysis of Medical Images (CAMI) software developed by the author. The user provides image and case information (diagnosis, staining, image parameters, etc.). When analysis is initialized, the input data with the image are sent to the servlet on Tomcat. When analysis is done, the client obtains the graphical results as an image with marked recognized cells and also the quantitative output. Additionally, the
Piezoelectric Actuator Modeling Using MSC/NASTRAN and MATLAB
NASA Technical Reports Server (NTRS)
Reaves, Mercedes C.; Horta, Lucas G.
2003-01-01
This paper presents a procedure for modeling structures containing piezoelectric actuators using MSC/NASTRAN and MATLAB. The paper describes the utility and functionality of one set of validated modeling tools. The tools described herein use MSC/NASTRAN to model the structure with piezoelectric actuators and a thermally induced strain to model straining of the actuators due to an applied voltage field. MATLAB scripts are used to assemble the dynamic equations and to generate frequency response functions. The application of these tools is discussed using a cantilever aluminum beam with a surface-mounted piezoelectric actuator as a sample problem. Software in the form of MSC/NASTRAN DMAP input commands, MATLAB scripts, and a step-by-step procedure to solve the example problem are provided. Analysis results are generated in terms of frequency response functions from deflection and strain data as a function of input voltage to the actuator.
Introduction to Multifractal Detrended Fluctuation Analysis in Matlab
Ihlen, Espen A. F.
2012-01-01
Fractal structures are found in biomedical time series from a wide range of physiological phenomena. The multifractal spectrum identifies the deviations in fractal structure within time periods with large and small fluctuations. The present tutorial is an introduction to multifractal detrended fluctuation analysis (MFDFA), which estimates the multifractal spectrum of biomedical time series. The tutorial presents MFDFA step-by-step in an interactive Matlab session. All Matlab tools needed are available in the Introduction to MFDFA folder at the website www.ntnu.edu/inm/geri/software. MFDFA is introduced in Matlab code boxes where the reader can apply pieces of, or the entire, MFDFA to example time series. After introducing MFDFA, the tutorial discusses best practice for MFDFA in biomedical signal processing. The main aim of the tutorial is to give the reader a simple self-sustained guide to the implementation of MFDFA and the interpretation of the resulting multifractal spectra. PMID:22675302
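The detrended-fluctuation core of the procedure is compact enough to sketch. The following is a minimal Python illustration of the profile/detrend/q-average steps (it is not Ihlen's Matlab code; the scale and q choices in the demo are arbitrary):

```python
import numpy as np

def mfdfa(x, scales, qs, order=1):
    """Minimal multifractal DFA sketch: returns the fluctuation matrix F[q, s]."""
    y = np.cumsum(x - np.mean(x))                  # step 1: the profile
    F = np.zeros((len(qs), len(scales)))
    for j, s in enumerate(scales):
        n_seg = len(y) // s
        rms = np.empty(n_seg)
        t = np.arange(s)
        for v in range(n_seg):                     # step 2: local detrending
            seg = y[v * s:(v + 1) * s]
            fit = np.polyval(np.polyfit(t, seg, order), t)
            rms[v] = np.sqrt(np.mean((seg - fit) ** 2))
        for i, q in enumerate(qs):                 # step 3: q-order averaging
            if q == 0:                             # log-average for q = 0
                F[i, j] = np.exp(0.5 * np.mean(np.log(rms ** 2)))
            else:
                F[i, j] = np.mean(rms ** q) ** (1.0 / q)
    return F

# h(q) is the slope of log F against log s; for white noise h(2) is near 0.5
rng = np.random.default_rng(1)
noise = rng.standard_normal(4096)
scales = [16, 32, 64, 128, 256]
F = mfdfa(noise, scales, qs=[2])
h2 = np.polyfit(np.log(scales), np.log(F[0]), 1)[0]
print(h2)
```

Repeating the slope fit over a range of q values and Legendre-transforming h(q) yields the multifractal spectrum discussed in the tutorial.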
Statistical properties of deterministic Bernoulli flows
Radunskaya, A.E.
1992-12-31
This thesis presents several new theorems about the stability and the statistical properties of deterministic chaotic flows. Many concrete systems known to exhibit deterministic chaos have so far been shown to be of a class known as Bernoulli flows. This class of flows is characterized by the Finitely Determined property, which can be checked in specific cases. The first theorem says that these flows can be modeled arbitrarily well for all time by continuous-time finite-state Markov processes. In other words, it is theoretically possible to model the flow arbitrarily well by a computer equipped with a roulette wheel. There follows a stability result, which says that one can distort the measurements made on the processes without affecting the approximation. These results are then applied to the problem of distinguishing deterministic chaos from stochastic processes in the analysis of time series. The second part of the thesis deals with a specific set of examples. Although it has been possible to analyze specific systems to determine whether they lie in the class of Bernoulli systems, the standard techniques rely on the construction of expanding and contracting fibers in the phase space of the system. These fibers are then used to coordinatize the phase space and to prove the existence of a hyperbolic structure. Unfortunately, such methods may fail in the general case, where smoothness conditions and a small singular set cannot be assumed. For example, suppose the standard billiard flow on a square table with a perfectly round obstacle, which is known to be Bernoulli, is replaced by a similar flow on a table with a bumpy, fractal-like obstacle: a model perhaps closer to nature. It is shown that these fibers no longer exist and hence cannot be used in the standard manner to prove Bernoulliness or ergodicity. But one can use the fact that the class of Bernoulli flows is closed in the d-bar metric to show that this billiard flow with a bumpy obstacle is in fact Bernoulli.
Deterministic, Nanoscale Fabrication of Mesoscale Objects
Jr., R M; Gilmer, J; Rubenchik, A; Shirk, M
2004-12-08
Neither LLNL nor any other organization has the capability to perform deterministic fabrication of mm-sized objects with arbitrary, μm-sized, 3-D features and with 100-nm-scale accuracy and smoothness. This is particularly true for materials such as high explosives and low-density aerogels, as well as materials such as diamond and vanadium. The motivation for this project was to investigate the physics and chemistry that control the interactions of solid surfaces with laser beams and ion beams, with a view towards their applicability to the desired deterministic fabrication processes. As part of this LDRD project, one of our goals was to advance the state of the art for experimental work, but, in order ultimately to create a deterministic capability for such precision micromachining, another goal was to form a new modeling/simulation capability that could also extend the state of the art in this field. We have achieved both goals. In this project, we have, for the first time, combined a 1-D hydrocode ("HYADES") with a 3-D molecular dynamics simulator ("MDCASK") in our modeling studies. In FY02 and FY03, we investigated the ablation/surface-modification processes that occur on copper, gold, and nickel substrates with the use of sub-ps laser pulses. In FY04, we investigated laser ablation of carbon, including laser-enhanced chemical reaction on the carbon surface for both vitreous carbon and carbon aerogels. Both experimental and modeling results are presented in the report that follows. The immediate impact of our investigation was a much better understanding of the chemical and physical processes that ensue when solid materials are exposed to femtosecond laser pulses. More broadly, we have better positioned LLNL to design a cluster tool for fabricating mesoscale objects utilizing laser pulses and ion beams as well as more traditional machining/manufacturing techniques for applications such as components in NIF targets, remote sensors, including
DNSLab: A gateway to turbulent flow simulation in Matlab
NASA Astrophysics Data System (ADS)
Vuorinen, V.; Keskinen, K.
2016-06-01
Computational fluid dynamics (CFD) research is increasingly focused on computationally intensive, eddy-resolving simulation techniques for turbulent flows, such as large-eddy simulation (LES) and direct numerical simulation (DNS). Here, we present a compact educational software package called DNSLab, tailored for learning the partial differential equations of turbulence from the perspective of DNS in the Matlab environment. Based on educational experiences and course feedback from tens of engineering post-graduate students and industrial engineers, DNSLab can offer a major gateway to turbulence simulation with minimal prerequisites. Matlab implementations of two common fractional-step projection methods are considered: the 2d Fourier pseudo-spectral method, and the 3d finite difference method with 2nd order spatial accuracy. Both methods are based on vectorization in Matlab, and slow for-loops are thus avoided. DNSLab is tested on two basic problems which we have noted to be of high educational value: a 2d periodic array of decaying vortices, and 3d turbulent channel flow at Reτ = 180. To the best of our knowledge, the present study is possibly the first to investigate the efficiency of a 3d turbulent, wall-bounded flow simulation in Matlab. The accuracy and efficiency of DNSLab are compared with a customized OpenFOAM solver called rk4projectionFoam. Based on our experiences and course feedback, the main contribution of DNSLab consists of the following features. (i) The very compact Matlab implementation of the present Navier-Stokes solvers provides a gateway to efficient learning of both the physics of turbulent flows and the simulation of turbulence. (ii) Only relatively minor prerequisites on fluid dynamics and numerical methods are required for using DNSLab. (iii) In 2d, interactive results for turbulent flow cases can be obtained. Even for a 3d channel flow, the solver is fast enough for nearly interactive educational use. (iv) DNSLab is made openly available and thus contributing to
Modelling Subsea Coaxial Cable as FIR Filter on MATLAB
NASA Astrophysics Data System (ADS)
Kanisin, D.; Nordin, M. S.; Hazrul, M. H.; Kumar, E. A.
2011-05-01
The paper presents the modelling of a subsea coaxial cable as an FIR filter in MATLAB. Subsea coaxial cables are commonly used in the telecommunication industry and in the oil and gas industry. Furthermore, such a cable is unlike a filter circuit, which is a "lumped network" in which individual components appear as discrete items; a subsea coaxial network can therefore be represented as a digital filter. Overall, the study has been conducted using MATLAB to model the subsea coaxial channel based on the primary and secondary parameters of the subsea coaxial cable.
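The central idea, representing the distributed cable as a digital FIR filter, can be illustrated in a few lines. The sketch below uses Python/NumPy rather than MATLAB, and the tap values are purely illustrative, not derived from any cable's measured primary or secondary parameters:

```python
import numpy as np

# Hypothetical 5-tap FIR approximation of a cable's impulse response;
# real taps would come from the cable's primary/secondary parameters.
h = np.array([0.05, 0.2, 0.5, 0.2, 0.05])

def channel(x, taps=h):
    """Apply the FIR channel model to an input signal (causal convolution)."""
    return np.convolve(x, taps)[:len(x)]

# feeding an impulse through the model recovers the taps themselves
impulse = np.concatenate(([1.0], np.zeros(7)))
print(channel(impulse))
```

Once the cable is in FIR form, standard filter tools (frequency response, group delay, equalizer design) apply directly to the channel model.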
Central limit behavior of deterministic dynamical systems
NASA Astrophysics Data System (ADS)
Tirnakli, Ugur; Beck, Christian; Tsallis, Constantino
2007-04-01
We investigate the probability density of rescaled sums of iterates of deterministic dynamical systems, a problem relevant for many complex physical systems consisting of dependent random variables. A central limit theorem (CLT) is valid only if the dynamical system under consideration is sufficiently mixing. For the fully developed logistic map and a cubic map we analytically calculate the leading-order corrections to the CLT if only a finite number of iterates is added and rescaled, and find excellent agreement with numerical experiments. At the critical point of period doubling accumulation, a CLT is not valid anymore due to strong temporal correlations between the iterates. Nevertheless, we provide numerical evidence that in this case the probability density converges to a q -Gaussian, thus leading to a power-law generalization of the CLT. The above behavior is universal and independent of the order of the maximum of the map considered, i.e., relevant for large classes of critical dynamical systems.
Deterministic multi-zone ice accretion modeling
NASA Technical Reports Server (NTRS)
Yamaguchi, K.; Hansman, R. J., Jr.; Kazmierczak, M.
1991-01-01
The study focuses on a deterministic model of the surface roughness transition behavior of glaze ice and analyzes the initial smooth/rough transition location, bead formation, and the propagation of the transition location. Based on a hypothesis that the smooth/rough transition location coincides with the laminar/turbulent boundary-layer transition location, a multizone model is implemented in the LEWICE code. In order to verify the effectiveness of the model, ice accretion predictions for simple cylinders calculated by the multizone LEWICE are compared to experimental ice shapes. The glaze ice shapes are found to be sensitive to the laminar surface roughness and bead thickness parameters controlling the transition location, while the ice shapes are found to be insensitive to the turbulent surface roughness.
Fast combinatorial optimization using generalized deterministic annealing
NASA Astrophysics Data System (ADS)
Acton, Scott T.; Ghosh, Joydeep; Bovik, Alan C.
1993-08-01
Generalized Deterministic Annealing (GDA) is a useful new tool for computing fast multi-state combinatorial optimization of difficult non-convex problems. By estimating the stationary distribution of simulated annealing (SA), GDA yields equivalent solutions to practical SA algorithms while providing a significant speed improvement. Using the standard GDA, the computational time of SA may be reduced by an order of magnitude, and, with a new implementation improvement, Windowed GDA, the time improvements reach two orders of magnitude with a trivial compromise in solution quality. The fast optimization of GDA has enabled expeditious computation of complex nonlinear image enhancement paradigms, such as the Piecewise Constant (PICO) regression examples used in this paper. To validate our analytical results, we apply GDA to the PICO regression problem and compare the results to other optimization methods. Several full image examples are provided that show successful PICO image enhancement using GDA in the presence of both Laplacian and Gaussian additive noise.
Deterministic polishing from theory to practice
NASA Astrophysics Data System (ADS)
Hooper, Abigail R.; Hoffmann, Nathan N.; Sarkas, Harry W.; Escolas, John; Hobbs, Zachary
2015-10-01
Improving predictability in optical fabrication can go a long way towards increasing profit margins and maintaining a competitive edge in an economic environment where pressure is mounting for optical manufacturers to cut costs. A major source of hidden cost is rework - the share of production that does not meet specification in the first pass through the polishing equipment. Rework substantially adds to the part's processing and labor costs and creates bottlenecks in production lines and frustration for managers, operators and customers. The polishing process consists of several interacting variables including: glass type, polishing pads, machine type, RPM, downforce, slurry type, Baumé level and even the operators themselves. Adjusting the process to get every variable under control while operating in a robust space can not only provide a deterministic polishing process which improves profitability but also produce a higher quality optic.
Targeted activation in deterministic and stochastic systems
NASA Astrophysics Data System (ADS)
Eisenhower, Bryan; Mezić, Igor
2010-02-01
Metastable escape is ubiquitous in many physical systems and is becoming a concern in engineering design as these designs (e.g., swarms of vehicles, coupled building energetics, nanoengineering, etc.) become more inspired by dynamics of biological, molecular and other natural systems. In light of this, we study a chain of coupled bistable oscillators which has two global conformations and we investigate how specialized or targeted disturbance is funneled in an inverse energy cascade and ultimately influences the transition process between the conformations. We derive a multiphase averaged approximation to these dynamics which illustrates the influence of actions in modal coordinates on the coarse behavior of this process. An activation condition that predicts how the disturbance influences the rate of transition is then derived. The prediction tools are derived for deterministic dynamics and we also present analogous behavior in the stochastic setting and show a divergence from Kramers activation behavior under targeted activation conditions.
Deterministic-random separation in nonstationary regime
NASA Astrophysics Data System (ADS)
Abboud, D.; Antoni, J.; Sieg-Zieba, S.; Eltabach, M.
2016-02-01
In rotating machinery vibration analysis, the synchronous average is perhaps the most widely used technique for extracting periodic components. Periodic components are typically related to gear vibrations, misalignments, unbalances, blade rotations, reciprocating forces, etc. Their separation from other random components is essential in vibration-based diagnosis in order to discriminate useful information from masking noise. However, synchronous averaging theoretically requires the machine to operate under a stationary regime (i.e. the related vibration signals are cyclostationary) and is otherwise jeopardized by the presence of amplitude and phase modulations. A first objective of this paper is to investigate the nature of the nonstationarity induced by the response of a linear time-invariant system subjected to a speed-varying excitation. For this purpose, the concept of a cyclo-non-stationary signal is introduced, which extends the class of cyclostationary signals to speed-varying regimes. Next, a "generalized synchronous average" (GSA) is designed to extract the deterministic part of a cyclo-non-stationary vibration signal, i.e. the analog of the periodic part of a cyclostationary signal. Two estimators of the GSA are proposed. The first one returns the synchronous average of the signal at predefined discrete operating speeds. A brief statistical study of it is performed, aiming to provide the user with confidence intervals that reflect the "quality" of the estimator according to the SNR and the estimated speed. The second estimator returns a smoothed version of the former by enforcing continuity over the speed axis. It helps to reconstruct the deterministic component by tracking a specific trajectory dictated by the speed profile (assumed to be known a priori). The proposed method is validated first on synthetic signals and then on actual industrial signals. The usefulness of the approach is demonstrated on envelope-based diagnosis of bearings in variable
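As background for the generalization, the classical synchronous average that the paper extends can be stated in a few lines. A minimal Python/NumPy sketch, assuming the cycle length is a known integer number of samples (the demo signal and parameter names are our own illustrative choices, not the paper's GSA estimators):

```python
import numpy as np

def synchronous_average(x, period):
    """Classical synchronous average: fold the signal into whole cycles of a
    known integer period (in samples) and average across cycles."""
    n_cycles = len(x) // period
    return x[:n_cycles * period].reshape(n_cycles, period).mean(axis=0)

# demo: recover a periodic gear-mesh-like component buried in noise
rng = np.random.default_rng(0)
template = np.sin(2 * np.pi * np.arange(64) / 64)   # one cycle, 64 samples
x = np.tile(template, 400) + rng.standard_normal(400 * 64)
recovered = synchronous_average(x, 64)
print(np.abs(recovered - template).max())   # noise suppressed by averaging
```

Averaging over N cycles attenuates the random part by roughly 1/sqrt(N), which is exactly the mechanism that amplitude and phase modulation under varying speed breaks, and that the paper's GSA estimators are designed to restore.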
TRIAC II. A MatLab code for track measurements from SSNT detectors
NASA Astrophysics Data System (ADS)
Patiris, D. L.; Blekas, K.; Ioannides, K. G.
2007-08-01
A computer program named TRIAC II, written in MATLAB and running with a friendly GUI, has been developed for the recognition and parameter measurement of particle tracks from images of Solid State Nuclear Track Detectors. The program, using image analysis tools, counts the number of tracks and, depending on the current working mode, classifies them according to their radii (Mode I: circular tracks) or their axes (Mode II: elliptical tracks), their mean intensity value (brightness), and their orientation. Images of the detectors' surfaces are input to the code, which generates text files as output, including the number of counted tracks with the associated track parameters. Hough transform techniques are used for the estimation of the number of tracks and their parameters, providing results even in cases of overlapping tracks. Finally, it is possible for the user to obtain informative histograms as well as output files for each image and/or group of images.
Program summary
Title of program: TRIAC II
Catalogue identifier: ADZC_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZC_v1_0
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computer: Pentium III, 600 MHz
Installations: MATLAB 7.0
Operating system under which the program has been tested: Windows XP
Programming language used: MATLAB
Memory required to execute with typical data: 256 MB
No. of bits in a word: 32
No. of processors used: one
Has the code been vectorized or parallelized?: no
No. of lines in distributed program, including test data, etc.: 25 964
No. of bytes in distributed program, including test data, etc.: 4 354 510
Distribution format: tar.gz
Additional comments: This program requires the MATLAB Statistics Toolbox and the Image Processing Toolbox to be installed.
Nature of physical problem: Following the passage of a charged particle (protons and heavier) through a Solid State Nuclear Track Detector (SSNTD), a damage region is created, usually named latent
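As an illustration of the Hough-transform idea TRIAC II uses for circular (Mode I) tracks, here is a minimal single-radius circle Hough accumulator in Python/NumPy. This is a sketch, not the program's MATLAB implementation; the image and radius are synthetic.

```python
import numpy as np

def hough_circle_centers(edges, radius, n_angles=90):
    """Single-radius circular Hough transform: every edge pixel votes for
    all candidate centers lying `radius` away; peaks mark track centers."""
    h, w = edges.shape
    acc = np.zeros((h, w), dtype=int)
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    for y, x in zip(*np.nonzero(edges)):
        cy = np.round(y - radius * np.sin(angles)).astype(int)
        cx = np.round(x - radius * np.cos(angles)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return acc

# Synthetic circular "track" of radius 10 centred at (row=32, col=40).
img = np.zeros((64, 64), dtype=bool)
theta = np.linspace(0.0, 2.0 * np.pi, 200)
img[np.round(32 + 10 * np.sin(theta)).astype(int),
    np.round(40 + 10 * np.cos(theta)).astype(int)] = True
acc = hough_circle_centers(img, 10)
peak = np.unravel_index(np.argmax(acc), acc.shape)   # near (32, 40)
```

Because every edge pixel votes independently, the accumulator still peaks at each center even when two track circles overlap, which is why the Hough approach handles overlapping tracks.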
Deterministic and non-deterministic switching in chains of magnetic hysterons.
Tanasa, R; Stancu, A
2011-10-26
This paper presents a fundamental analysis of the hysteresis of a chain of single-domain ferromagnetic particles in perpendicular geometry as a prototype for ultra-high density memories. Due to long-range magnetostatic interactions the system has a complex hysteresis, but stable features can be found. The loop has a number of deterministic Barkhausen jumps and consequently a number of stable plateaus that could be used in multistate memories. The fundamental elements that sustain this behavior are shown and discussed. PMID:21969255
An Improved QRS Wave Group Detection Algorithm and Matlab Implementation
NASA Astrophysics Data System (ADS)
Zhang, Hongjun
This paper presents an algorithm, implemented with Matlab software, for detecting the QRS wave group in the MIT-BIH ECG database. First, the noise in the ECG is removed with a Butterworth filter; the signal is then analyzed by wavelet transform, locating singularity points to determine the characteristic parameters, so that more accurate detection of the QRS wave group is achieved.
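The Butterworth pre-filtering step can be sketched as follows. This is a hedged illustration in Python/SciPy rather than the paper's Matlab code, and the 5-15 Hz passband is a common ECG choice assumed here, not taken from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def ecg_bandpass(x, fs, lo=5.0, hi=15.0, order=3):
    """Zero-phase Butterworth band-pass: suppresses baseline wander
    (below `lo`) and high-frequency noise (above `hi`)."""
    b, a = butter(order, [lo / (fs / 2.0), hi / (fs / 2.0)], btype="band")
    return filtfilt(b, a, x)

fs = 360.0                                   # MIT-BIH sampling rate (Hz)
t = np.arange(0.0, 2.0, 1.0 / fs)
drift = 0.5 * np.sin(2 * np.pi * 0.3 * t)    # baseline wander
inband = np.sin(2 * np.pi * 10.0 * t)        # QRS-band content
filtered = ecg_bandpass(drift + inband, fs)
drift_power = np.var(ecg_bandpass(drift, fs))  # nearly zero after filtering
```

Zero-phase filtering (`filtfilt`) matters here: it preserves the timing of the R peaks, which the subsequent wavelet singularity detection depends on.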
The sisterhood method of estimating maternal mortality: the Matlab experience.
Shahidullah, M
1995-01-01
This study reports the results of a test of validation of the sisterhood method of measuring the level of maternal mortality using data from a Demographic Surveillance System (DSS) operating since 1966 in Matlab, Bangladesh. The records of maternal deaths that occurred during 1976-90 in the Matlab DSS area were used. One of the deceased woman's surviving brothers or sisters, aged 15 or older and born to the same mother, was asked if the deceased sister had died of maternity-related causes. Of the 384 maternal deaths for which siblings were interviewed, 305 deaths were correctly reported, 16 deaths were underreported, and the remaining 63 were misreported as nonmaternal deaths. Information on maternity-related deaths obtained in a sisterhood survey conducted in the Matlab DSS area was compared with the information recorded in the DSS. Results suggest that in places similar to Matlab, the sisterhood method can be used to provide an indication of the level of maternal mortality if no other data exist, though the method will produce negative bias in maternal mortality estimates. PMID:7618193
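The counts quoted above determine the size of the negative bias directly. A small worked calculation in Python, using only the figures given in the abstract:

```python
# Counts quoted above for the Matlab DSS validation of the sisterhood method.
total_maternal = 384      # maternal deaths with a sibling interviewed
correct = 305             # correctly reported as maternal by the sibling
underreported = 16
misreported = 63          # reported as nonmaternal

sensitivity = correct / total_maternal     # fraction correctly captured (~0.79)
negative_bias = 1.0 - sensitivity          # fraction of maternal deaths missed
```

About one maternal death in five is missed by the sibling reports, which is the "negative bias" the study warns the method will produce.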
Autonomous robot vision software design using Matlab toolboxes
NASA Astrophysics Data System (ADS)
Tedder, Maurice; Chung, Chan-Jin
2004-10-01
The purpose of this paper is to introduce a cost-effective way to design robot vision and control software using Matlab for an autonomous robot designed to compete in the 2004 Intelligent Ground Vehicle Competition (IGVC). The goal of the autonomous challenge event is for the robot to autonomously navigate an outdoor obstacle course bounded by solid and dashed lines on the ground. Visual input data is provided by a DV camcorder at 160 x 120 pixel resolution. The design of this system involved writing an image-processing algorithm using hue, saturation, and brightness (HSB) color filtering and Matlab image processing functions to extract the centroid, area, and orientation of the connected regions from the scene. These feature vectors are then mapped to linguistic variables that describe the objects in the world environment model. The linguistic variables act as inputs to a fuzzy logic controller designed using the Matlab fuzzy logic toolbox, which provides the knowledge and intelligence component necessary to achieve the desired goal. Java provides the central interface to the robot motion control and image acquisition components. Field test results indicate that the Matlab-based solution allows for rapid software design, development, and modification of our robot system.
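The centroid/area/orientation features mentioned above can be computed from the image moments of a binary region. Below is a minimal Python/NumPy sketch; the robot's pipeline used Matlab's image processing functions, and the moment-based orientation formula here is a standard equivalent, not the paper's code.

```python
import numpy as np

def region_features(mask):
    """Area, centroid, and orientation of one binary region via image
    moments -- the kind of feature vector extracted after HSB filtering."""
    ys, xs = np.nonzero(mask)
    area = xs.size
    cx, cy = xs.mean(), ys.mean()
    mu20 = ((xs - cx) ** 2).mean()           # second central moments
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    orientation = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
    return area, (cx, cy), orientation

# A thin 45-degree diagonal stripe: orientation comes out at pi/4.
mask = np.zeros((50, 50), dtype=bool)
for i in range(40):
    mask[5 + i, 5 + i] = True
area, (cx, cy), theta = region_features(mask)
```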
Equilibrium-Staged Separations Using Matlab and Mathematica
ERIC Educational Resources Information Center
Binous, Housam
2008-01-01
We show a new approach, based on the utilization of Matlab and Mathematica, for solving liquid-liquid extraction and binary distillation problems. In addition, the author shares his experience using these two software packages to teach equilibrium-staged separations at the National Institute of Applied Sciences and Technology. (Contains 7 figures.)
MATLAB: Another Way To Teach the Computer in the Classroom.
ERIC Educational Resources Information Center
Marriott, Shaun
2002-01-01
Describes a pilot project for MATLAB work in both information communication technology (ICT) and mathematics. The ICT work is on flowcharts and algorithms and discusses ways of communicating with computers. Mathematics lessons involve early algebraic ideas of variables representing numbers. Presents an activity involving number sequences. (KHR)
Enhancing Teaching using MATLAB Add-Ins for Excel
ERIC Educational Resources Information Center
Hamilton, Paul V.
2004-01-01
In this paper I will illustrate how to extend the capabilities of Microsoft Excel spreadsheets with add-ins created by MATLAB. Excel provides a broad array of fundamental tools but often comes up short when more sophisticated scenarios are involved. To overcome this short-coming of Excel while retaining its ease of use, I will describe how…
MATLAB tensor classes for fast algorithm prototyping : source code.
Bader, Brett William; Kolda, Tamara Gibson
2004-10-01
We present the source code for three MATLAB classes for manipulating tensors in order to allow fast algorithm prototyping. A tensor is a multidimensional or N-way array. This is a supplementary report; details on using this code are provided separately in SAND-XXXX.
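A core operation that tensor classes of this kind provide is mode-n unfolding (matricization). The following Python/NumPy sketch is an assumption-level illustration of the operation, not the API of the Sandia MATLAB classes:

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding (matricization): the mode-`mode` fibers of an
    N-way array become the columns of a matrix."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def fold(matrix, mode, shape):
    """Inverse of `unfold` for a tensor of the given original shape."""
    rest = [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(matrix.reshape([shape[mode]] + rest), 0, mode)

t = np.arange(24).reshape(2, 3, 4)   # a 3-way array
m = unfold(t, 1)                     # 3 x 8 matrix
restored = fold(m, 1, t.shape)       # round-trips exactly
```

Unfold/fold pairs like this are what let tensor algorithms (e.g. mode-n products) be prototyped with ordinary matrix operations.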
Not Available
1991-03-01
This report summarizes the results of a deterministic assessment of earthquake ground motions at the Savannah River Site (SRS). The purpose of this study is to assist the Environmental Sciences Section of the Savannah River Laboratory in reevaluating the design basis earthquake (DBE) ground motion at SRS using approaches defined in Appendix A to 10 CFR Part 100. This work is in support of the Seismic Engineering Section's Seismic Qualification Program for reactor restart.
Simple Deterministically Constructed Recurrent Neural Networks
NASA Astrophysics Data System (ADS)
Rodan, Ali; Tiňo, Peter
A large number of models for time series processing, forecasting, or modeling follow a state-space formulation. Models in the specific class of state-space approaches referred to as Reservoir Computing fix their state-transition function. The state space with the associated state-transition structure forms a reservoir, which is supposed to be sufficiently complex so as to capture a large number of features of the input stream that can be potentially exploited by the reservoir-to-output readout mapping. The largely "black box" character of reservoirs prevents us from performing a deeper theoretical investigation of the dynamical properties of successful reservoirs. Reservoir construction is largely driven by a series of (more-or-less) ad-hoc randomized model building stages, with both researchers and practitioners having to rely on trial and error. We show that a very simple deterministically constructed reservoir with simple cycle topology gives performance comparable to that of the Echo State Network (ESN) on a number of time series benchmarks. Moreover, we argue that the memory capacity of such a model can be made arbitrarily close to the proven theoretical limit.
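The "simple cycle topology" can be made concrete: the reservoir weight matrix is a single directed cycle with one shared weight r, so its spectral radius is exactly r with no random search needed. A minimal Python/NumPy sketch (illustrative, not the authors' code):

```python
import numpy as np

def simple_cycle_reservoir(n, r=0.9):
    """Deterministic reservoir with simple cycle topology: unit i feeds
    unit i+1 (mod n) with a single shared weight r."""
    W = np.zeros((n, n))
    for i in range(n):
        W[(i + 1) % n, i] = r
    return W

W = simple_cycle_reservoir(50, r=0.9)
# Every eigenvalue of the weighted cycle lies on the circle |z| = r,
# so the spectral radius (and hence the echo-state scaling) is exactly r.
eig_moduli = np.abs(np.linalg.eigvals(W))
```

This is the appeal of the construction: a property that randomized ESN reservoirs only control approximately (via rescaling an estimated spectral radius) holds here by design.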
Deterministic particle transport in a ratchet flow
NASA Astrophysics Data System (ADS)
Beltrame, Philippe; Makhoul, Mounia; Joelson, Maminirina
2016-01-01
This study is motivated by the issue of pumping particles through a periodically modulated channel. We focus on a simplified deterministic model of small-inertia particles within the Stokes flow framework, which we call the "ratchet flow." A path-following method is employed in the parameter space in order to retrace the scenario that leads from bounded periodic solutions to particle transport. Depending on whether the magnitude of the particle drag is moderate or large, two main transport mechanisms are identified, in which the role of the parity symmetry of the flow differs. For large drag, transport is induced by flow asymmetry, while for moderate drag, since the full transport-solution bifurcation structure already exists for symmetric settings, flow asymmetry only makes the transport effective. We analyze the scenarios of current reversal for each mechanism as well as the role of synchronization. In particular, we show that, for large drag, the particle drift is similar to phase slip in a synchronization problem.
Traffic chaotic dynamics modeling and analysis of deterministic network
NASA Astrophysics Data System (ADS)
Wu, Weiqiang; Huang, Ning; Wu, Zhitao
2016-07-01
Network traffic is an important and direct factor in network reliability and performance. To understand the behavior of network traffic, chaotic dynamics models have been proposed and have helped greatly in analyzing nondeterministic networks. Previous research held that chaotic dynamics behavior was caused by random factors, and that deterministic networks would not exhibit chaotic dynamics because they lack such random factors. In this paper, we first adopt chaos theory to analyze traffic data collected from an avionics full-duplex switched Ethernet (AFDX) testbed, a typical deterministic network, and find that chaotic dynamics behavior also exists in deterministic networks. Then, in order to explore the chaos-generating mechanism, we apply mean field theory to construct a traffic dynamics equation (TDE) for deterministic network traffic modeling without any random network factors. Through studying the derived TDE, we propose that chaotic dynamics is one of the intrinsic properties of network traffic, and that it can also be viewed as the effect of the TDE control parameters. A network simulation was performed, and the results verified that network congestion produces chaotic dynamics in a deterministic network, in agreement with the expectation from the TDE. Our research will be helpful for analyzing the complicated dynamic behavior of traffic in deterministic networks and will contribute to network reliability design and analysis.
Stochastic and Deterministic Assembly Processes in Subsurface Microbial Communities
Stegen, James C.; Lin, Xueju; Konopka, Allan; Fredrickson, Jim K.
2012-03-29
A major goal of microbial community ecology is to understand the forces that structure community composition. Deterministic selection by specific environmental factors is sometimes important, but in other cases stochastic or ecologically neutral processes dominate. Lacking is a unified conceptual framework aiming to understand why deterministic processes dominate in some contexts but not others. Here we work towards such a framework. By testing predictions derived from general ecological theory we aim to uncover factors that govern the relative influences of deterministic and stochastic processes. We couple spatiotemporal data on subsurface microbial communities and environmental parameters with metrics and null models of within and between community phylogenetic composition. Testing for phylogenetic signal in organismal niches showed that more closely related taxa have more similar habitat associations. Community phylogenetic analyses further showed that ecologically similar taxa coexist to a greater degree than expected by chance. Environmental filtering thus deterministically governs subsurface microbial community composition. More importantly, the influence of deterministic environmental filtering relative to stochastic factors was maximized at both ends of an environmental variation gradient. A stronger role of stochastic factors was, however, supported through analyses of phylogenetic temporal turnover. While phylogenetic turnover was on average faster than expected, most pairwise comparisons were not themselves significantly non-random. The relative influence of deterministic environmental filtering over community dynamics was elevated, however, in the most temporally and spatially variable environments. Our results point to general rules governing the relative influences of stochastic and deterministic processes across micro- and macro-organisms.
EEGVIS: A MATLAB Toolbox for Browsing, Exploring, and Viewing Large Datasets
Robbins, Kay A.
2012-01-01
Recent advances in data monitoring and sensor technology have accelerated the acquisition of very large data sets. Streaming data sets from instrumentation such as multi-channel EEG recording usually must undergo substantial pre-processing and artifact removal. Even when using automated procedures, most scientists engage in laborious manual examination and processing to assure high quality data and to identify interesting or problematic data segments. Researchers also do not have a convenient method of visually assessing the effects of applying any stage in a processing pipeline. EEGVIS is a MATLAB toolbox that allows users to quickly explore multi-channel EEG and other large array-based data sets using multi-scale drill-down techniques. Customizable summary views reveal potentially interesting sections of data, which users can explore further by clicking to examine using detailed viewing components. The viewer and a companion browser are built on our MoBBED framework, which has a library of modular viewing components that can be mixed and matched to best reveal structure. Users can easily create new viewers for their specific data without any programming during the exploration process. These viewers automatically support pan, zoom, resizing of individual components, and cursor exploration. The toolbox can be used directly in MATLAB at any stage in a processing pipeline, as a plug-in for EEGLAB, or as a standalone precompiled application without MATLAB running. EEGVIS and its supporting packages are freely available under the GNU general public license at http://visual.cs.utsa.edu/eegvis. PMID:22654753
Nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates
Melechko, Anatoli V.; McKnight, Timothy E. , Guillorn, Michael A.; Ilic, Bojan; Merkulov, Vladimir I.; Doktycz, Mitchel J.; Lowndes, Douglas H.; Simpson, Michael L.
2011-05-17
Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. A method includes depositing a catalyst particle on a surface of a substrate to define a deterministically located position; growing an aligned elongated nanostructure on the substrate, an end of the aligned elongated nanostructure coupled to the substrate at the deterministically located position; coating the aligned elongated nanostructure with a conduit material; removing a portion of the conduit material to expose the catalyst particle; removing the catalyst particle; and removing the elongated nanostructure to define a nanoconduit.
Surface plasmon field enhancements in deterministic aperiodic structures.
Shugayev, Roman
2010-11-22
In this paper we analyze optical properties and plasmonic field enhancements in large aperiodic nanostructures. We introduce extension of Generalized Ohm's Law approach to estimate electromagnetic properties of Fibonacci, Rudin-Shapiro, cluster-cluster aggregate and random deterministic clusters. Our results suggest that deterministic aperiodic structures produce field enhancements comparable to random morphologies while offering better understanding of field localizations and improved substrate design controllability. Generalized Ohm's law results for deterministic aperiodic structures are in good agreement with simulations obtained using discrete dipole method. PMID:21164839
Deterministic, Nanoscale Fabrication of Mesoscale Objects
Jr., R M; Shirk, M; Gilmer, G; Rubenchik, A
2004-09-24
Neither LLNL nor any other organization has the capability to perform deterministic fabrication of mm-sized objects with arbitrary, µm-sized, 3-dimensional features with 20-nm-scale accuracy and smoothness. This is particularly true for materials such as high explosives and low-density aerogels. For deterministic fabrication of high energy-density physics (HEDP) targets, it will be necessary both to fabricate features in a wide variety of materials as well as to understand and simulate the fabrication process. We continue to investigate, both in experiment and in modeling, the ablation/surface-modification processes that occur with the use of laser pulses that are near the ablation threshold fluence. During the first two years, we studied ablation of metals, and we used sub-ps laser pulses, because pulses shorter than the electron-phonon relaxation time offered the most precise control of the energy that can be deposited into a metal surface. The use of sub-ps laser pulses also allowed a decoupling of the energy-deposition process from the ensuing movement/ablation of the atoms from the solid, which simplified the modeling. We investigated the ablation of material from copper, gold, and nickel substrates. We combined the power of the 1-D hydrocode "HYADES" with the state-of-the-art, 3-D molecular dynamics simulations "MDCASK" in our studies. For FY04, we have stretched ourselves to investigate laser ablation of carbon, including chemically-assisted processes. We undertook this research because the energy deposition required to perform direct sublimation of carbon is much higher than that needed to stimulate the reaction 2C + O2 → 2CO. Thus, extremely fragile carbon aerogels might survive the chemically-assisted process more readily than ablation via direct laser sublimation. We had planned to start by studying vitreous carbon and move onto carbon aerogels. We were able to obtain flat, high-quality vitreous carbon, which was easy to work on
Slow Orbit Feedback at the ALS Using Matlab
Portmann, G.
1999-03-25
The third generation Advanced Light Source (ALS) produces extremely bright and finely focused photon beams using undulators, wigglers, and bend magnets. In order to position the photon beams accurately, a slow global orbit feedback system has been developed. The dominant causes of orbit motion at the ALS are temperature variation and insertion device motion. This type of motion can be removed using slow global orbit feedback with a data rate of a few hertz. The remaining orbit motion in the ALS is only 1-3 micron rms. Slow orbit feedback does not require high computational throughput. At the ALS, the global orbit feedback algorithm, based on the singular value decomposition method, is coded in MATLAB and runs on a control room workstation. Using the MATLAB environment to develop, test, and run the storage ring control algorithms has proven to be a fast and efficient way to operate the ALS.
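The SVD-based correction can be sketched compactly: invert the measured orbit response matrix in the least-squares sense and apply the negated solution as corrector kicks. A Python/NumPy illustration with made-up toy dimensions (the ALS implementation is in MATLAB):

```python
import numpy as np

def orbit_correction(R, orbit_error, n_sv=None):
    """Least-squares corrector strengths from the orbit response matrix R
    (BPM reading per unit corrector kick) via an SVD pseudo-inverse."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    if n_sv is not None:                    # optionally truncate small SVs
        U, s, Vt = U[:, :n_sv], s[:n_sv], Vt[:n_sv]
    return -Vt.T @ ((U.T @ orbit_error) / s)

rng = np.random.default_rng(1)
R = rng.normal(size=(12, 6))                # 12 BPMs, 6 correctors (toy sizes)
kicks_true = rng.normal(size=6)
orbit = R @ kicks_true                      # measured orbit distortion
correction = orbit_correction(R, orbit)     # approximately -kicks_true
residual = np.linalg.norm(R @ (kicks_true + correction))
```

Truncating small singular values (the `n_sv` knob) is what keeps such a feedback loop from amplifying BPM noise along poorly observed orbit directions.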
A Covariance NMR Toolbox for MATLAB and OCTAVE
NASA Astrophysics Data System (ADS)
Short, Timothy; Alzapiedi, Leigh; Brüschweiler, Rafael; Snyder, David
2011-03-01
The Covariance NMR Toolbox is a new software suite that provides a streamlined implementation of covariance-based analysis of multi-dimensional NMR data. The Covariance NMR Toolbox uses the MATLAB or, alternatively, the freely available GNU OCTAVE computer language, providing a user-friendly environment in which to apply and explore covariance techniques. Covariance methods implemented in the toolbox described here include direct and indirect covariance processing, 4D covariance, generalized indirect covariance (GIC), and Z-matrix transform. In order to provide compatibility with a wide variety of spectrometer and spectral analysis platforms, the Covariance NMR Toolbox uses the NMRPipe format for both input and output files. Additionally, datasets small enough to fit in memory are stored as arrays that can be displayed and further manipulated in a versatile manner within MATLAB or OCTAVE.
A covariance NMR toolbox for MATLAB and OCTAVE.
Short, Timothy; Alzapiedi, Leigh; Brüschweiler, Rafael; Snyder, David
2011-03-01
The Covariance NMR Toolbox is a new software suite that provides a streamlined implementation of covariance-based analysis of multi-dimensional NMR data. The Covariance NMR Toolbox uses the MATLAB or, alternatively, the freely available GNU OCTAVE computer language, providing a user-friendly environment in which to apply and explore covariance techniques. Covariance methods implemented in the toolbox described here include direct and indirect covariance processing, 4D covariance, generalized indirect covariance (GIC), and Z-matrix transform. In order to provide compatibility with a wide variety of spectrometer and spectral analysis platforms, the Covariance NMR Toolbox uses the NMRPipe format for both input and output files. Additionally, datasets small enough to fit in memory are stored as arrays that can be displayed and further manipulated in a versatile manner within MATLAB or OCTAVE. PMID:21215669
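Direct covariance processing, the simplest of the methods listed, computes C = (FᵀF)^(1/2) from the data matrix F. A Python/NumPy sketch via eigendecomposition (illustrative; the toolbox itself runs in MATLAB/OCTAVE on NMRPipe files, and the random matrix below merely stands in for a real 2D spectrum):

```python
import numpy as np

def direct_covariance(F):
    """Direct covariance NMR: C = (F^T F)^(1/2), mapping an N1 x N2
    spectral matrix to a symmetric N2 x N2 covariance spectrum."""
    C2 = F.T @ F
    w, V = np.linalg.eigh(C2)            # C2 is symmetric positive semidefinite
    w = np.clip(w, 0.0, None)            # guard tiny negative eigenvalues
    return (V * np.sqrt(w)) @ V.T        # V diag(sqrt(w)) V^T

rng = np.random.default_rng(2)
F = rng.normal(size=(32, 8))             # stand-in for an N1 x N2 data matrix
C = direct_covariance(F)
```

The matrix square root, rather than FᵀF itself, keeps peak intensities on the same scale as a conventionally processed spectrum.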
Deterministic Function Computation with Chemical Reaction Networks
Chen, Ho-Lin; Doty, David; Soloveichik, David
2013-01-01
Chemical reaction networks (CRNs) formally model chemistry in a well-mixed solution. CRNs are widely used to describe information processing occurring in natural cellular regulatory networks, and with upcoming advances in synthetic biology, CRNs are a promising language for the design of artificial molecular control circuitry. Nonetheless, despite the widespread use of CRNs in the natural sciences, the range of computational behaviors exhibited by CRNs is not well understood. CRNs have been shown to be efficiently Turing-universal (i.e., able to simulate arbitrary algorithms) when allowing for a small probability of error. CRNs that are guaranteed to converge on a correct answer, on the other hand, have been shown to decide only the semilinear predicates (a multi-dimensional generalization of "eventually periodic" sets). We introduce the notion of function, rather than predicate, computation by representing the output of a function f : ℕ^k → ℕ^l by a count of some molecular species, i.e., if the CRN starts with x_1, …, x_k molecules of some "input" species X_1, …, X_k, the CRN is guaranteed to converge to having f(x_1, …, x_k) molecules of the "output" species Y_1, …, Y_l. We show that a function f : ℕ^k → ℕ^l is deterministically computed by a CRN if and only if its graph {(x, y) ∈ ℕ^k × ℕ^l ∣ f(x) = y} is a semilinear set. Finally, we show that each semilinear function f (a function whose graph is a semilinear set) can be computed by a CRN on input x in expected time O(polylog ∥x∥_1). PMID:25383068
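Two tiny examples of deterministic function computation by CRNs, simulated sequentially in Python: X → 2Y computes f(x) = 2x, and X1 + X2 → Y computes min(x1, x2). Both graphs are semilinear, consistent with the characterization above. (The sequential simulation style is an illustration, not the paper's formalism.)

```python
def crn_double(x):
    """Reaction X -> 2Y run to completion computes f(x) = 2x."""
    X, Y = x, 0
    while X > 0:                 # fire X -> 2Y until no X remains
        X -= 1
        Y += 2
    return Y

def crn_min(x1, x2):
    """Reaction X1 + X2 -> Y run to completion computes min(x1, x2)."""
    Y = 0
    while x1 > 0 and x2 > 0:     # the reaction needs one of each reactant
        x1 -= 1
        x2 -= 1
        Y += 1
    return Y
```

In both cases the final count of Y is independent of the order in which reactions fire, which is what "deterministic" computation means here.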
Reproducible and deterministic production of aspheres
NASA Astrophysics Data System (ADS)
Leitz, Ernst Michael; Stroh, Carsten; Schwalb, Fabian
2015-10-01
Aspheric lenses are ground in a single-point cutting mode. Subsequently, different iterative polishing methods are applied, followed by aberration measurements on external metrology instruments. For economical production, metrology and correction steps need to be reduced. More deterministic grinding and polishing is mandatory. Single-point grinding is a path-controlled process. The quality of a ground asphere is mainly influenced by the accuracy of the machine. Machine improvements must focus on path accuracy and thermal expansion. Optimized design, materials, and thermal management reduce thermal expansion. The path accuracy can be improved using ISO 230-2 standardized measurements. Repeated interferometric measurements over the total travel of all CNC axes in both directions are recorded. Position deviations evaluated in correction tables improve the path accuracy and thus that of the ground surface. Aspheric polishing using a sub-aperture flexible polishing tool is a dwell-time-controlled process. For plano and spherical polishing, the amount of material removal during polishing is proportional to pressure, relative velocity, and time (Preston). For the use of flexible tools on aspheres or freeform surfaces, additional non-linear components are necessary. Satisloh ADAPT calculates a predicted removal function from lens geometry, tool geometry, and process parameters with FEM. Additionally, the tool's local removal characteristic is determined in a simple test. By oscillating the tool on a plano or spherical sample of the same lens material, a trench is created. Its 3-D profile is measured to calibrate the removal simulation. Remaining aberrations of the desired lens shape can be predicted, reducing iteration and metrology steps.
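The dwell-time control described above rests on Preston's law: removal depth ≈ K × pressure × relative velocity × dwell time, so a target removal map converts directly into a dwell-time map. A back-of-the-envelope Python sketch with illustrative numbers (the coefficient K and process values below are assumptions, not manufacturer data):

```python
# Preston's law: removal depth = K * pressure * relative velocity * dwell time.
K = 2.0e-13               # Preston coefficient, m^3/(N*m) -- illustrative only
pressure = 1.0e4          # tool contact pressure, Pa
velocity = 0.5            # relative tool velocity, m/s

target_removal = [50e-9, 120e-9, 80e-9]      # desired depth per zone, m
dwell = [d / (K * pressure * velocity) for d in target_removal]  # seconds
```

In practice K is not assumed but calibrated from the trench test the abstract describes, which is why the measured 3-D trench profile feeds the removal simulation.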
Deterministic versus stochastic trends: Detection and challenges
NASA Astrophysics Data System (ADS)
Fatichi, S.; Barbosa, S. M.; Caporali, E.; Silva, M. E.
2009-09-01
The detection of a trend in a time series and the evaluation of its magnitude and statistical significance is an important task in geophysical research. This importance is amplified in climate change contexts, since trends are often used to characterize long-term climate variability and to quantify the magnitude and the statistical significance of changes in climate time series, both at global and local scales. Recent studies have demonstrated that the stochastic behavior of a time series can change the statistical significance of a trend, especially if the time series exhibits long-range dependence. The present study examines the trends in time series of daily average temperature recorded at 26 stations in the Tuscany region (Italy). In this study a new framework for trend detection is proposed. First, two parametric statistical tests, the Phillips-Perron test and the Kwiatkowski-Phillips-Schmidt-Shin test, are applied in order to test for trend-stationary and difference-stationary behavior in the temperature time series. Then long-range dependence is assessed using different approaches, including wavelet analysis, heuristic methods, and fitting fractionally integrated autoregressive moving average models. The trend detection results are further compared with the results obtained using nonparametric trend detection methods: the Mann-Kendall, Cox-Stuart, and Spearman's ρ tests. This study confirms an increase in uncertainty when pronounced stochastic behaviors are present in the data. Nevertheless, for approximately one third of the analyzed records, the stochastic behavior itself cannot explain the long-term features of the time series, and a deterministic positive trend is the most likely explanation.
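Of the nonparametric tests compared, the Mann-Kendall test is the easiest to sketch: it counts concordant minus discordant pairs and applies a normal approximation. A minimal Python/NumPy version without tie correction (an illustration, not the study's code):

```python
import numpy as np
from math import erf, sqrt

def mann_kendall(x):
    """Mann-Kendall trend test (no tie correction): S counts concordant
    minus discordant pairs; p comes from the normal approximation."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = 0.0
    for i in range(n - 1):
        s += np.sign(x[i + 1:] - x[i]).sum()   # pairwise sign comparisons
    var = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / sqrt(var) if s != 0 else 0.0
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))  # two-sided
    return s, p

trending = np.arange(30.0) + 0.01 * np.sin(np.arange(30.0))  # monotone series
s, p = mann_kendall(trending)   # every pair increases, so S = 435, tiny p
```

Note that this classical form assumes independent observations; under long-range dependence the variance formula understates var(S), which is precisely the inflation of false significance the study warns about.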
Understanding Vertical Jump Potentiation: A Deterministic Model.
Suchomel, Timothy J; Lamont, Hugh S; Moir, Gavin L
2016-06-01
This review article discusses previous postactivation potentiation (PAP) literature and provides a deterministic model for vertical jump (i.e., squat jump, countermovement jump, and drop/depth jump) potentiation. There are a number of factors that must be considered when designing an effective strength-power potentiation complex (SPPC) focused on vertical jump potentiation. Sport scientists and practitioners must consider the characteristics of the subject being tested and the design of the SPPC itself. Subject characteristics that must be considered when designing an SPPC focused on vertical jump potentiation include the individual's relative strength, sex, muscle characteristics, neuromuscular characteristics, current fatigue state, and training background. Aspects of the SPPC that must be considered for vertical jump potentiation include the potentiating exercise, level and rate of muscle activation, volume load completed, the ballistic or non-ballistic nature of the potentiating exercise, and the rest interval(s) used following the potentiating exercise. Sport scientists and practitioners should seek SPPCs that are practical in nature regarding the equipment needed and the rest interval required for a potentiated performance. If practitioners would like to incorporate PAP as a training tool, they must take the athlete's training time restrictions into account, as a number of previous SPPCs have been shown to require long rest periods before potentiation can be realized. Thus, practitioners should seek SPPCs that may be effectively implemented in training and that do not require excessive rest intervals that may take away from valuable training time. Practitioners may decrease the time needed to realize potentiation by improving their subject's relative strength. PMID:26712510
Deterministic phase retrieval employing spherical illumination
NASA Astrophysics Data System (ADS)
Martínez-Carranza, J.; Falaggis, K.; Kozacki, T.
2015-05-01
Deterministic Phase Retrieval techniques (DPRTs) employ a series of paraxial beam intensities in order to recover the phase of a complex field. These paraxial intensities are usually generated in systems that employ plane-wave illumination. This type of illumination allows direct processing of the captured intensities with DPRTs for recovering the phase. Furthermore, it has been shown that intensities for DPRTs can be acquired from systems that use spherical illumination as well. However, this type of illumination presents a major setback for DPRTs: the captured intensities change their size for each position of the detector on the propagation axis. In order to apply the DPRTs, rescaling of the captured intensities has to be applied. This step can increase the error sensitivity of the final phase result if it is not carried out properly. In this work, we introduce a novel system based on a Phase Light Modulator (PLM) for capturing the intensities when employing spherical illumination. The proposed optical system enables us to capture the diffraction pattern of under-, in-, and over-focus intensities. The employment of the PLM allows capturing the corresponding intensities without displacing the detector. Moreover, with the proposed optical system we can control accurately the magnification of the captured intensities. Thus, the stack of captured intensities can be used in DPRTs, overcoming the problems related to the resizing of the images. In order to prove our claims, the corresponding numerical experiments are carried out. These simulations show that the retrieved phases with spherical illumination are accurate and can be compared with those that employ plane-wave illumination. We demonstrate that with the employment of the PLM the proposed optical system has several advantages: the optical system is compact, the beam size on the detector plane is controlled accurately, and the errors coming from mechanical motion can be suppressed easily.
Automated optimum design of wing structures. Deterministic and probabilistic approaches
NASA Technical Reports Server (NTRS)
Rao, S. S.
1982-01-01
The automated optimum design of airplane wing structures subjected to multiple behavior constraints is described. The structural mass of the wing is considered the objective function. The maximum stress, wing-tip deflection, root angle of attack, and flutter velocity during the pull-up maneuver (static load), the natural frequencies of the wing structure, and the stresses induced in the wing structure due to landing and gust loads are suitably constrained. Both deterministic and probabilistic approaches are used for finding the stresses induced in the airplane wing structure due to landing and gust loads. A wing design is represented by a uniform beam with a cross section in the form of a hollow symmetric double wedge. The airfoil thickness and chord length are the design variables, and a graphical procedure is used to find the optimum solutions. A supersonic wing design is represented by finite elements. The thicknesses of the skin and the web and the cross-sectional areas of the flanges are the design variables, and nonlinear programming techniques are used to find the optimum solution.
Agent-Based Deterministic Modeling of the Bone Marrow Homeostasis.
Kurhekar, Manish; Deshpande, Umesh
2016-01-01
Modeling of stem cells not only describes but also predicts how a stem cell's environment can control its fate. The first stem cell populations discovered were hematopoietic stem cells (HSCs). In this paper, we present a deterministic model of bone marrow (which hosts HSCs) that is consistent with several of the qualitative biological observations. This model incorporates stem cell death (apoptosis) after a certain number of cell divisions and demonstrates that a single HSC can potentially populate the entire bone marrow. It also demonstrates that a sufficient number of differentiated cells (RBCs, WBCs, etc.) are produced. We prove that our model of bone marrow is biologically consistent and overcomes the biological feasibility limitations of previously reported models. The major contribution of our model is the flexibility it allows in choosing model parameters, which permits several different simulations to be carried out in silico without affecting the homeostatic properties of the model. We have also performed an agent-based simulation of the proposed bone marrow model and include parameter details and the results obtained from the simulation. The program of the agent-based simulation of the proposed model is made available on a publicly accessible website. PMID:27340402
Optical properties of graphene simulated in MATLAB using scattering matrices
NASA Astrophysics Data System (ADS)
Cariappa K., S.; Kumar, Anil
2016-04-01
Transmittance and absorbance spectra of monolayer and bilayer graphene are simulated, in the wavelength range 400-900 nm, using scattering matrices of graphene and air. MATLAB is used for the simulation studies, and the results are in good agreement with the experimental values reported in the literature. The high transmittance values exhibited by graphene, along with its electrical properties, make it a potential alternative to conventional transparent conducting oxides.
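As a quick sanity check on such results (a hedged sketch, not the paper's MATLAB scattering-matrix code), the well-known flat visible-range transmittance of graphene can be reproduced by modeling each layer as a thin conducting sheet with the universal conductivity e^2/(4*hbar), which gives T = (1 + n*pi*alpha/2)^-2 for n layers:

```python
import numpy as np

ALPHA = 1.0 / 137.035999  # fine-structure constant

def graphene_transmittance(n_layers=1):
    """Normal-incidence transmittance of n_layers of graphene in vacuum,
    treating the stack as one conducting sheet of conductivity
    n * e^2/(4*hbar): amplitude t = 1/(1 + n*pi*alpha/2), power T = t^2.
    A thin-sheet shortcut, not the full air/graphene scattering-matrix
    cascade used in the paper."""
    t = 1.0 / (1.0 + n_layers * np.pi * ALPHA / 2.0)
    return t * t

T1 = graphene_transmittance(1)  # ~0.977: the familiar ~2.3% absorption/layer
T2 = graphene_transmittance(2)  # bilayer transmits less than monolayer
```

For one layer this yields roughly 97.7% transmittance, the wavelength-independent value the scattering-matrix simulations recover across the visible range.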
Hybrid photovoltaic/thermal (PV/T) solar systems simulation with Simulink/Matlab
da Silva, R.M.; Fernandes, J.L.M.
2010-12-15
and perform reasonably well. The Simulink modeling platform has been mainly used worldwide on simulation of control systems, digital signal processing and electric circuits, but there are very few examples of application to solar energy systems modeling. This work uses the modular environment of Simulink/Matlab to model individual PV/T system components, and to assemble the entire installation layout. The results show that the modular approach strategy provided by Matlab/Simulink environment is applicable to solar systems modeling, providing good code scalability, faster developing time, and simpler integration with external computational tools, when compared with traditional imperative-oriented programming languages. (author)
3CCD image segmentation and edge detection based on MATLAB
NASA Astrophysics Data System (ADS)
He, Yong; Pan, Jiazhi; Zhang, Yun
2006-09-01
This research aimed to identify weeds among crops at an early stage of field operation by using image-processing technology. 3CCD images offer a greater binary-value difference between weed and crop regions than ordinary digital images taken by common cameras. A 3CCD camera has three channels (green, red, infrared) that capture a snapshot of the same area, and the three images can be composed into one image, which facilitates the segmentation of different regions. With the image-processing toolkit in MATLAB, the different regions in the image can be segmented clearly. As edge detection is the first and a very important step in image processing, the results of different processing methods were compared. In particular, using the wavelet packet transform toolkit in MATLAB, an image was preprocessed and its edges were then extracted, yielding a more clear-cut edge image. The segmentation methods include operations such as erosion, dilation and other algorithms to preprocess the images. Segmenting different regions of digital images in the field in real time is of great importance for precision farming, saving energy, herbicide and other materials. At present, large-scale software such as MATLAB on a PC is used, but the computation can be reduced and integrated into a small embedded system, which means that the application of this technique in agricultural engineering is feasible and of great economic value.
A Method to Separate Stochastic and Deterministic Information from Electrocardiograms
NASA Astrophysics Data System (ADS)
Gutiérrez, R. M.; Sandoval, L. A.
2005-01-01
In this work we present a new idea for developing a method to separate the stochastic and deterministic information contained in an electrocardiogram (ECG), which may provide new sources of information for diagnostic purposes. We assume that the ECG contains information corresponding to many different processes related to the cardiac activity, as well as contamination from different sources related to the measurement procedure and the nature of the observed system itself. The method starts with the application of an improved archetypal analysis to separate the aforementioned stochastic and deterministic information. From the stochastic point of view we analyze Renyi entropies, and with respect to the deterministic perspective we calculate the autocorrelation function and the corresponding correlation time. We show that healthy and pathologic information may be stochastic and/or deterministic, and can be identified by different measures and located in different parts of the ECG.
Kotze, Ben; Jordaan, Gerrit
2014-01-01
Automatic Guided Vehicles (AGVs) are navigated utilising multiple types of sensors for detecting the environment. In this investigation such sensors are replaced and/or minimized by the use of a single omnidirectional camera picture stream. An area of interest is extracted, and by using image processing the vehicle is navigated on a set path. Reconfigurability is added to the route layout by signs incorporated in the navigation process. The result is the possible manipulation of a number of AGVs, each on its own designated colour-signed path. This route is reconfigurable by the operator with no programming alteration or intervention. A low-resolution camera and a Matlab® software development platform are utilised. The use of Matlab® lends itself to speedy evaluation and implementation of image-processing options on the AGV, but its functioning in such an environment needs to be assessed. PMID:25157548
Integral-transport-based deterministic brachytherapy dose calculations
NASA Astrophysics Data System (ADS)
Zhou, Chuanyu; Inanc, Feyzi
2003-01-01
We developed a transport-equation-based deterministic algorithm for computing three-dimensional brachytherapy dose distributions. The algorithm is based on the integral transport equation and provides the capability of computing dose distributions for multiple isotropic point and/or volumetric sources in a homogeneous or heterogeneous medium. The algorithm results have been benchmarked against results from the literature and against MCNP results for isotropic point sources and volumetric sources.
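For illustration only (this is not the authors' integral-transport algorithm), the uncollided part of a point-kernel dose calculation in a homogeneous medium can be sketched in a few lines; the scattered component, which the deterministic transport solution supplies, is omitted here:

```python
import numpy as np

def dose(points, sources, strengths, mu):
    """Uncollided dose from isotropic point sources in a homogeneous medium:
        D(r) = sum_k S_k * exp(-mu*|r - r_k|) / (4*pi*|r - r_k|^2),
    with mu the linear attenuation coefficient.  This is only the
    uncollided-flux kernel; a full integral-transport solution also
    accounts for scattered radiation, which is not modeled here."""
    points = np.atleast_2d(points)
    out = np.zeros(len(points))
    for r_k, s_k in zip(np.atleast_2d(sources), np.atleast_1d(strengths)):
        d = np.linalg.norm(points - r_k, axis=1)
        out += s_k * np.exp(-mu * d) / (4.0 * np.pi * d**2)
    return out

# One unit source at the origin, no attenuation: dose at |r| = 1 is 1/(4*pi).
d = dose([[1.0, 0.0, 0.0]], [[0.0, 0.0, 0.0]], [1.0], 0.0)
```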
HyDRa: control of parameters for deterministic polishing.
Ruiz, E; Salas, L; Sohn, E; Luna, E; Herrera, J; Quiros, F
2013-08-26
Deterministic hydrodynamic polishing with HyDRa requires precise control of polishing parameters such as propelling air pressure, slurry density, slurry flux and tool height. We describe the HyDRa polishing system and show how precise, deterministic polishing can be achieved through the control of these parameters. The polishing results for an 84 cm hyperbolic mirror are presented to illustrate how the stability of these parameters is important for obtaining high-quality surfaces. PMID:24105579
CPMC-Lab: A MATLAB package for Constrained Path Monte Carlo calculations
NASA Astrophysics Data System (ADS)
Nguyen, Huy; Shi, Hao; Xu, Jie; Zhang, Shiwei
2014-12-01
We describe CPMC-Lab, a MATLAB program for the constrained-path and phaseless auxiliary-field Monte Carlo methods. These methods have allowed applications ranging from the study of strongly correlated models, such as the Hubbard model, to ab initio calculations in molecules and solids. The present package implements the full ground-state constrained-path Monte Carlo (CPMC) method in MATLAB with a graphical interface, using the Hubbard model as an example. The package can perform calculations in finite supercells in any dimension, under periodic or twist boundary conditions. Importance sampling and all other algorithmic details of a total energy calculation are included and illustrated. This open-source tool allows users to experiment with various model and run parameters and visualize the results. It provides a direct and interactive environment to learn the method and study the code with minimal overhead for setup. Furthermore, the package can be easily generalized for auxiliary-field quantum Monte Carlo (AFQMC) calculations in many other models for correlated electron systems, and can serve as a template for developing a production code for AFQMC total energy calculations in real materials. Several illustrative studies are carried out in one- and two-dimensional lattices on total energy, kinetic energy, potential energy, and charge- and spin-gaps.
MultiElec: A MATLAB Based Application for MEA Data Analysis
Georgiadis, Vassilis; Stephanou, Anastasis; Townsend, Paul A.; Jackson, Thomas R.
2015-01-01
We present MultiElec, an open source MATLAB based application for data analysis of microelectrode array (MEA) recordings. MultiElec displays an extremely user-friendly graphic user interface (GUI) that allows the simultaneous display and analysis of voltage traces for 60 electrodes and includes functions for activation-time determination, the production of activation-time heat maps with activation time and isoline display. Furthermore, local conduction velocities are semi-automatically calculated along with their corresponding vector plots. MultiElec allows ad hoc signal suppression, enabling the user to easily and efficiently handle signal artefacts and for incomplete data sets to be analysed. Voltage traces and heat maps can be simply exported for figure production and presentation. In addition, our platform is able to produce 3D videos of signal progression over all 60 electrodes. Functions are controlled entirely by a single GUI with no need for command line input or any understanding of MATLAB code. MultiElec is open source under the terms of the GNU General Public License as published by the Free Software Foundation, version 3. Both the program and source code are available to download from http://www.cancer.manchester.ac.uk/MultiElec/. PMID:26076010
Structural deterministic safety factors selection criteria and verification
NASA Technical Reports Server (NTRS)
Verderaime, V.
1992-01-01
Though current deterministic safety factors are arbitrarily and unaccountably specified, their ratios are rooted in the probability distributions of resistive and applied stresses. This study approached the deterministic method from a probabilistic concept, leading to a more systematic and coherent philosophy and criterion for designing more uniform and reliable high-performance structures. The deterministic method was noted to consist of three safety factors: a standard-deviation multiplier of the applied stress distribution; a K-factor for the A- or B-basis material ultimate stress; and the conventional safety factor to ensure that the applied stress does not operate in the inelastic zone of metallic materials. The conventional safety factor is specifically defined as the ratio of ultimate-to-yield stresses. A deterministic safety index of the combined safety factors was derived, from which the corresponding reliability showed that the deterministic method is not reliability-sensitive. The bases for selecting safety factors are presented and verification requirements are discussed. The suggested deterministic approach is applicable to all NASA, DOD, and commercial high-performance structures under static stresses.
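The probabilistic concept behind such a safety index can be illustrated with the textbook reliability index for normally distributed resistive stress R and applied stress S (a generic sketch; the paper's own index combines its three specific safety factors, which are not reproduced here):

```python
import math

def reliability_index(mu_r, sigma_r, mu_s, sigma_s):
    """Classical safety (reliability) index for independent normal
    resistive stress R and applied stress S:
        beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2)."""
    return (mu_r - mu_s) / math.sqrt(sigma_r**2 + sigma_s**2)

def failure_probability(beta):
    """P(R < S) = Phi(-beta) under the normal model."""
    return 0.5 * math.erfc(beta / math.sqrt(2.0))

# Hypothetical numbers: mean resistance 100 (sd 10), mean load 60 (sd 5).
beta = reliability_index(100.0, 10.0, 60.0, 5.0)  # = 40/sqrt(125) ~ 3.58
```

A larger beta means a more reliable structure; a deterministic safety factor alone fixes only the ratio of the means, which is why the abstract notes the method is not reliability-sensitive.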
Using STOQS and stoqstoolbox for in situ Measurement Data Access in Matlab
NASA Astrophysics Data System (ADS)
López-Castejón, F.; Schlining, B.; McCann, M. P.
2012-12-01
This poster presents the stoqstoolbox, an extension to Matlab that simplifies the loading of in situ measurement data directly from STOQS databases. STOQS (Spatial Temporal Oceanographic Query System) is a geospatial database tool designed to provide efficient access to data following the CF-NetCDF Discrete Samples Geometries convention. Data are loaded from CF-NetCDF files into a STOQS database where indexes are created on depth, spatial coordinates and other parameters, e.g. platform type. STOQS provides consistent, simple and efficient methods to query for data. For example, we can request all measurements with a standard_name of sea_water_temperature between two times and from between two depths. Data access is simpler because the data are retrieved by parameter irrespective of platform or mission file names. Access is more efficient because data are retrieved via the index on depth and only the requested data are retrieved from the database and transferred into the Matlab workspace. Applications in the stoqstoolbox query the STOQS database via an HTTP REST application programming interface; they follow the Data Access Object pattern, enabling highly customizable query construction. Data are loaded into Matlab structures that clearly indicate latitude, longitude, depth, measurement data value, and platform name. The stoqstoolbox is designed to be used in concert with other tools, such as nctoolbox, which can load data from any OPeNDAP data source. With these two toolboxes a user can easily work with in situ and other gridded data, such as from numerical models and remote sensing platforms. In order to show the capability of stoqstoolbox we will show an example of model validation using data collected during the May-June 2012 field experiment conducted by the Monterey Bay Aquarium Research Institute (MBARI) in Monterey Bay, California. The data are available from the STOQS server at http://odss.mbari.org/canon/stoqs_may2012/query/. Over 14 million data points of
Parallel distance matrix computation for Matlab data mining
NASA Astrophysics Data System (ADS)
Skurowski, Przemysław; Staniszewski, Michał
2016-06-01
The paper presents utility functions for computing a distance matrix, which plays a crucial role in data mining. The design goal was to enable operating on relatively large datasets by overcoming the basic shortcoming, computing time, with an easy-to-use interface. The presented solution is a set of functions created with emphasis on practical applicability in real life. The proposed solution is presented along with the theoretical background for the performance scaling. Furthermore, different approaches to parallel computing are analyzed, including shared memory, which is uncommon in the Matlab environment.
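The core computation such a toolbox accelerates can be sketched as follows; this Python/NumPy version (an illustration, not the authors' Matlab code) vectorizes the pairwise Euclidean distance matrix through the identity ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y, replacing the double loop with one matrix product:

```python
import numpy as np

def distance_matrix(X, Y):
    """Pairwise Euclidean distances between rows of X (m,d) and Y (n,d)."""
    sq_x = np.sum(X**2, axis=1)[:, None]   # (m,1) squared norms
    sq_y = np.sum(Y**2, axis=1)[None, :]   # (1,n) squared norms
    d2 = sq_x + sq_y - 2.0 * (X @ Y.T)     # squared distances via one GEMM
    np.maximum(d2, 0.0, out=d2)            # clip tiny negatives from round-off
    return np.sqrt(d2)

X = np.array([[0.0, 0.0], [3.0, 4.0]])
D = distance_matrix(X, X)
# D[0, 1] is the 3-4-5 distance: 5.0
```

Casting the computation as a matrix product is also what lets shared-memory parallelism (multithreaded BLAS) pay off, which is the performance-scaling point the paper develops.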
Causes of maternal mortality decline in Matlab, Bangladesh.
Chowdhury, Mahbub Elahi; Ahmed, Anisuddin; Kalim, Nahid; Koblinsky, Marge
2009-04-01
Bangladesh is distinct among developing countries in achieving a low maternal mortality ratio (MMR) of 322 per 100,000 livebirths despite the very low use of skilled care at delivery (13% nationally). This variation has also been observed in Matlab, a rural area in Bangladesh, where longitudinal data on maternal mortality have been available since the mid-1970s. The current study investigated the possible causes of the maternal mortality decline in Matlab. The study analyzed 769 maternal deaths and 215,779 pregnancy records from the Health and Demographic Surveillance System (HDSS) and other sources of safe-motherhood data in the ICDDR,B and government service areas in Matlab during 1976-2005. The major interventions that took place in both areas since the early 1980s were the family-planning programme plus safe menstrual regulation services and safe-motherhood interventions (midwives for normal delivery in the ICDDR,B service area from the late 1980s and equal access to comprehensive emergency obstetric care [EmOC] in public facilities for women from both areas). National programmes for social development and empowerment of women through education and microcredit programmes were implemented in both areas. The quantitative findings were supplemented by a qualitative study that interviewed local community care providers about changes in their maternal healthcare practices over time. After the introduction of the safe-motherhood programme, the reduction in maternal mortality was higher in the ICDDR,B service area (68.6%) than in the government service area (50.4%) between 1986-1989 and 2001-2005. The reduction in the number of maternal deaths due to the fertility decline was higher in the government service area (30%) than in the ICDDR,B service area (23%) during 1979-2005. In each area there has been a substantial reduction in abortion-related mortality, of 86.7% and 78.3% in the ICDDR,B and government service areas respectively. Education of women was a strong predictor
Tomoeye: A Matlab package for visualization of three-dimensional tomographic models
NASA Astrophysics Data System (ADS)
Gorbatov, A.; Limaye, A.; Sambridge, M.
2004-04-01
The use of seismic imaging techniques is widespread. Numerous three-dimensional (3-D) tomographic models have been presented over the last 30 years and subsequently analyzed by a wider community of seismologists, geodynamicists, mineral physicists, and geochemists. However, platform-independent, open source, user-friendly software for interactive exploration of tomographic models does not exist. Here, we present a package for interactive visualization, analysis, and presentation of tomographic models. Using a set of four Matlab programs, multiscale tomographic models can be explored in Cartesian or spherical coordinate systems; data subsets can be extracted and combined; publication-quality figures can be produced; and Virtual Reality Modeling Language (VRML) models can be produced for 3-D visualization and publication on the World Wide Web. This type of freely available software package will encourage the distribution of tomographic models in a standardized form for independent peer review by the research community.
Simulation for Wind Turbine Generators -- With FAST and MATLAB-Simulink Modules
Singh, M.; Muljadi, E.; Jonkman, J.; Gevorgian, V.; Girsang, I.; Dhupia, J.
2014-04-01
This report presents the work done to develop generator and gearbox models in the Matrix Laboratory (MATLAB) environment and couple them to the National Renewable Energy Laboratory's Fatigue, Aerodynamics, Structures, and Turbulence (FAST) program. The goal of this project was to interface the superior aerodynamic and mechanical models of FAST to the excellent electrical generator models found in various Simulink libraries and applications. The scope was limited to Type 1, Type 2, and Type 3 generators and fairly basic gear-train models. Future work will include models of Type 4 generators and more-advanced gear-train models with increased degrees of freedom. As described in this study, implementation of the developed drivetrain model enables the software tool to be used in many ways. Several case studies are presented as examples of the many types of studies that can be performed using this tool.
GRace: a MATLAB-based application for fitting the discrimination-association model.
Stefanutti, Luca; Vianello, Michelangelo; Anselmi, Pasquale; Robusto, Egidio
2014-01-01
The Implicit Association Test (IAT) is a computerized two-choice discrimination task in which stimuli have to be categorized as belonging to target categories or attribute categories by pressing, as quickly and accurately as possible, one of two response keys. The discrimination association model has been recently proposed for the analysis of reaction time and accuracy of an individual respondent to the IAT. The model disentangles the influences of three qualitatively different components on the responses to the IAT: stimuli discrimination, automatic association, and termination criterion. The article presents General Race (GRace), a MATLAB-based application for fitting the discrimination association model to IAT data. GRace has been developed for Windows as a standalone application. It is user-friendly and does not require any programming experience. The use of GRace is illustrated on the data of a Coca Cola-Pepsi Cola IAT, and the results of the analysis are interpreted and discussed. PMID:26054728
Development of a Deterministic Ethernet Building blocks for Space Applications
NASA Astrophysics Data System (ADS)
Fidi, C.; Jakovljevic, Mirko
2015-09-01
The benefits of using commercially based networking standards and protocols have been widely discussed and are expected to include reductions in overall mission cost, shortened integration and test (I&T) schedules, increased operations flexibility, and hardware and software upgradeability/scalability following developments in the commercial world. The deterministic Ethernet technology TTEthernet [1] deployed on the NASA Orion spacecraft demonstrated the use of this technology in a safety-critical human spaceflight application during Exploration Flight Test 1 (EFT-1). The TTEthernet technology used within the NASA Orion program was matured for that mission but did not lead to broader use in space applications or to an international space standard. TTTech has therefore developed a new version that allows the technology to be scaled for different applications, not only high-end missions: decreasing the size of the building blocks reduces size, weight and power and enables use in smaller applications. TTTech is currently developing a full space-product offering for its TTEthernet technology to allow its use in space applications beyond launchers and human spaceflight. A broad space-market assessment and the current ESA TRP7594 led to the development of a space-grade TTEthernet controller ASIC based on the ESA-qualified Atmel AT1C8RHA95 process [2]. In this paper we describe our current TTEthernet controller development towards a space-qualified network component, allowing future spacecraft to operate in significant radiation environments while using a single onboard network for reliable commanding and data transfer.
Graphics development of DCOR: Deterministic combat model of Oak Ridge
Hunt, G.; Azmy, Y.Y.
1992-10-01
DCOR is a user-friendly computer implementation of a deterministic combat model developed at ORNL. To make the interpretation of the results more intuitive, a conversion of the numerical solution to a graphic animation sequence of battle evolution is desirable. DCOR uses a coarse computational spatial mesh superimposed on the battlefield. This research is aimed at developing robust methods for computing the position of the combative units over the continuum (and also pixeled) battlefield, from DCOR's discrete-variable solution representing the density of each force type evaluated at gridpoints. Three main problems have been identified and solutions have been devised and implemented in a new visualization module of DCOR. First, there is the problem of distributing the total number of objects, each representing a combative unit of each force type, among the gridpoints at each time level of the animation. This problem is solved by distributing, for each force type, the total number of combative units, one by one, to the gridpoint with the largest calculated number of units. Second, there is the problem of distributing the number of units assigned to each computational gridpoint over the battlefield area attributed to that point. This problem is solved by distributing the units within that area by taking into account the influence of surrounding gridpoints using linear interpolation. Finally, time interpolated solutions must be generated to produce a sufficient number of frames to create a smooth animation sequence. Currently, enough frames may be generated either by direct computation via the PDE solver or by using linear programming techniques to linearly interpolate intermediate frames between calculated frames.
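The first allocation step described above admits a compact sketch. The following Python illustration is one possible reading of the rule (the actual DCOR visualization module may differ): markers are assigned one at a time to the gridpoint whose computed density most exceeds its current marker count:

```python
import numpy as np

def place_units(density, n_units):
    """Distribute n_units markers over gridpoints, one at a time, each to
    the gridpoint whose calculated density currently exceeds its assigned
    count by the most.  A hypothetical rendering of the DCOR rule."""
    density = np.asarray(density, dtype=float)
    counts = np.zeros(density.shape, dtype=int)
    for _ in range(n_units):
        i = np.argmax(density - counts)   # most under-represented gridpoint
        counts[i] += 1
    return counts

# Densities summing to 8.0 distributed as 8 whole units.
c = place_units([4.2, 1.1, 2.7], 8)
```

The greedy rule guarantees the total unit count is conserved exactly while the integer counts track the fractional density profile as closely as possible.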
GPELab, a Matlab toolbox to solve Gross-Pitaevskii equations I: Computation of stationary solutions
NASA Astrophysics Data System (ADS)
Antoine, Xavier; Duboscq, Romain
2014-11-01
This paper presents GPELab (Gross-Pitaevskii Equation Laboratory), an advanced easy-to-use and flexible Matlab toolbox for numerically simulating many complex physics situations related to Bose-Einstein condensation. The model equation that GPELab solves is the Gross-Pitaevskii equation. The aim of this first part is to present the physical problems and the robust and accurate numerical schemes that are implemented for computing stationary solutions, to show a few computational examples and to explain how the basic GPELab functions work. Problems that can be solved include: 1d, 2d and 3d situations, general potentials, large classes of local and nonlocal nonlinearities, multi-component problems, and fast rotating gases. The toolbox is developed in such a way that other physics applications that require the numerical solution of general Schrödinger-type equations can be considered. Catalogue identifier: AETU_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AETU_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 26 552 No. of bytes in distributed program, including test data, etc.: 611 289 Distribution format: tar.gz Programming language: Matlab. Computer: PC, Mac. Operating system: Windows, Mac OS, Linux. Has the code been vectorized or parallelized?: Yes RAM: 4000 Megabytes Classification: 2.7, 4.6, 7.7. Nature of problem: Computing stationary solutions for a class of systems (multi-component) of Gross-Pitaevskii equations in 1d, 2d and 3d. This program is particularly well designed for the computation of ground states of Bose-Einstein condensates as well as dynamics. Solution method: We use the imaginary-time method with a Semi-Implicit Backward Euler scheme, a pseudo-spectral approximation and a Krylov subspace method. Running time: From a few minutes
DETERMINISTIC TRANSPORT METHODS AND CODES AT LOS ALAMOS
J. E. MOREL
1999-06-01
The purposes of this paper are to: Present a brief history of deterministic transport methods development at Los Alamos National Laboratory from the 1950's to the present; Discuss the current status and capabilities of deterministic transport codes at Los Alamos; and Discuss future transport needs and possible future research directions. Our discussion of methods research necessarily includes only a small fraction of the total research actually done. The works that have been included represent a very subjective choice on the part of the author that was strongly influenced by his personal knowledge and experience. The remainder of this paper is organized in four sections: the first relates to deterministic methods research performed at Los Alamos, the second relates to production codes developed at Los Alamos, the third relates to the current status of transport codes at Los Alamos, and the fourth relates to future research directions at Los Alamos.
Estimating the epidemic threshold on networks by deterministic connections
Li, Kezan; Zhu, Guanghu; Fu, Xinchu; Small, Michael
2014-12-15
For many epidemic networks some connections between nodes are treated as deterministic, while the remainder are random and have different connection probabilities. By applying spectral analysis to several constructed models, we find that one can estimate the epidemic thresholds of these networks by investigating information from only the deterministic connections. These models also incorporate generic nonuniform stochastic connections and heterogeneous community structure. The estimation of epidemic thresholds is achieved via inequalities with upper and lower bounds, which are found to be in very good agreement with numerical simulations. Since deterministic connections are easier to detect than stochastic ones, this work provides a feasible and effective method to estimate the epidemic thresholds in real epidemic networks.
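For context, the classical spectral result that such bounds refine places the SIS epidemic threshold at the reciprocal of the largest eigenvalue of the network's adjacency matrix. A minimal Python illustration of that baseline (not the paper's deterministic-connection construction):

```python
import numpy as np

def sis_threshold(adj):
    """Classical spectral estimate of the SIS epidemic threshold:
        tau_c = 1 / lambda_max(A),
    where lambda_max is the largest eigenvalue of the (symmetric)
    adjacency matrix A.  The paper's bounds refine this estimate using
    only the deterministic part of the network."""
    lam = np.max(np.linalg.eigvalsh(np.asarray(adj, dtype=float)))
    return 1.0 / lam

# Complete graph on 4 nodes: lambda_max = 3, so tau_c = 1/3.
K4 = np.ones((4, 4)) - np.eye(4)
tau = sis_threshold(K4)
```

An infection rate above tau_c lets an outbreak persist; below it, the epidemic dies out, which is the quantity the upper and lower bounds bracket.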
Deterministic transformations of multipartite entangled states with tensor rank 2
Turgut, S.; Guel, Y.; Pak, N. K.
2010-01-15
Transformations involving only local operations assisted with classical communication are investigated for multipartite entangled pure states having tensor rank 2. All necessary and sufficient conditions for the possibility of deterministically converting truly multipartite, rank-2 states into each other are given. Furthermore, a chain of local operations that successfully achieves the transformation has been identified for all allowed transformations. The identified chains have two nice features: (1) each party needs to carry out at most one local operation and (2) all of these local operations are also deterministic transformations by themselves. Finally, it is found that there are disjoint classes of states, all of which can be identified by a single real parameter, which remain invariant under deterministic transformations.
Improved Modeling in a Matlab-Based Navigation System
NASA Technical Reports Server (NTRS)
Deutschmann, Julie; Bar-Itzhack, Itzhack; Harman, Rick; Larimore, Wallace E.
1999-01-01
An innovative approach to autonomous navigation is available for low earth orbit satellites. The system is developed in Matlab and utilizes an Extended Kalman Filter (EKF) to estimate the attitude and trajectory based on spacecraft magnetometer and gyro data. Preliminary tests of the system with real spacecraft data from the Rossi X-Ray Timing Explorer Satellite (RXTE) indicate the existence of unmodeled errors in the magnetometer data. Incorporating into the EKF a statistical model that describes the colored component of the effective measurement of the magnetic field vector could improve the accuracy of the trajectory and attitude estimates and also improve the convergence time. This model is identified as a first-order Markov process. With the addition of the model, the EKF attempts to identify the non-white components of the noise, allowing for more accurate estimation of the original state vector, i.e. the orbital elements and the attitude. Working in Matlab allows for easy incorporation of new models into the EKF, and the resulting navigation system is generic and can easily be applied to future missions, providing an alternative for onboard or ground-based navigation.
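A first-order Markov (colored) measurement error of the kind described is commonly handled by appending the error to the filter state. The sketch below is a hypothetical one-dimensional linear analogue, not the RXTE filter: the constant state, the AR(1) coefficient, and the noise variances are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D analogue: a constant state x measured through a
# first-order Markov (AR(1)) bias b plus white noise. All parameter
# values are invented for illustration.
phi = 0.95                # AR(1) coefficient of the colored error
q_b, r = 0.05, 0.1        # bias process and white measurement noise variances
x_true, n = 2.0, 2000
b, ys = 0.0, []
for _ in range(n):
    b = phi * b + rng.normal(0.0, np.sqrt(q_b))
    ys.append(x_true + b + rng.normal(0.0, np.sqrt(r)))

# Kalman filter with the bias appended to the state: z = [x, b].
F = np.array([[1.0, 0.0],
              [0.0, phi]])      # x is constant, b decays like AR(1)
H = np.array([[1.0, 1.0]])      # the measurement sees x + b
Q = np.diag([0.0, q_b])
z, P = np.zeros(2), np.eye(2) * 10.0
for y in ys:
    z, P = F @ z, F @ P @ F.T + Q                  # predict
    S = (H @ P @ H.T + r)[0, 0]
    K = (P @ H.T / S).ravel()                      # gain, shape (2,)
    z = z + K * (y - H @ z)[0]                     # update
    P = (np.eye(2) - np.outer(K, H[0])) @ P

x_hat = z[0]   # estimate of the constant state, separated from the bias
```

Because the bias is mean-reverting while the state is constant, the two become separable over time, which is the mechanism the abstract credits for the improved estimates.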
MATLAB toolbox for the regularized surface reconstruction from gradients
NASA Astrophysics Data System (ADS)
Harker, Matthew; O'Leary, Paul
2015-04-01
As Photometric Stereo is a means of measuring the gradient field of a surface, an essential step in the measurement of a surface structure is the reconstruction of a surface from its measured gradient field. Given that the surface normals are subject to noise, straightforward integration does not provide an adequate reconstruction of the surface. In fact, if the noise in the gradient can be considered to be Gaussian, the optimal reconstruction based on maximum likelihood principles is obtained by the method of least-squares. However, since the reconstruction of a surface from its gradient is an inverse problem, it is usually necessary to introduce some form of regularization of the solution. This paper describes and demonstrates the functionality of a library of MATLAB functions for the regularized reconstruction of a surface from its measured gradient field. The library of functions, entitled "Surface Reconstruction from Gradient Fields: grad2Surf Version 1.0", is available at the MATLAB file-exchange http://www.mathworks.com/matlabcentral/fileexchange/authors/321598. The toolbox is the culmination of a number of papers on the least-squares reconstruction of a surface from its measured gradient field, regularized solutions to the problem, and real-time implementations of the algorithms.
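The least-squares formulation can be sketched directly: each forward difference of the unknown surface is matched to a measured gradient sample, and the resulting overdetermined system is solved in the least-squares sense. The dense solver below is a toy stand-in (grad2Surf itself uses fast, regularized algorithms), and the quadratic test surface is an invented example.

```python
import numpy as np

def surf_from_grad(gx, gy):
    """Reconstruct a surface z (up to a constant) from forward-difference
    gradients gx (m x (n-1)) and gy ((m-1) x n) by dense least squares.
    Toy stand-in for the fast regularized solvers in grad2Surf."""
    m, n = gy.shape[0] + 1, gx.shape[1] + 1
    idx = lambda i, j: i * n + j
    rows, cols, vals, rhs = [], [], [], []
    eq = 0
    for i in range(m):
        for j in range(n - 1):          # z[i, j+1] - z[i, j] = gx[i, j]
            rows += [eq, eq]; cols += [idx(i, j + 1), idx(i, j)]
            vals += [1.0, -1.0]; rhs.append(gx[i, j]); eq += 1
    for i in range(m - 1):
        for j in range(n):              # z[i+1, j] - z[i, j] = gy[i, j]
            rows += [eq, eq]; cols += [idx(i + 1, j), idx(i, j)]
            vals += [1.0, -1.0]; rhs.append(gy[i, j]); eq += 1
    A = np.zeros((eq, m * n))
    A[rows, cols] = vals
    z, *_ = np.linalg.lstsq(A, np.array(rhs), rcond=None)
    z = z.reshape(m, n)
    return z - z.mean()                 # the gradient fixes z only up to a constant

# Recover a known surface from its exact forward differences.
y, x = np.mgrid[0:8, 0:8].astype(float)
z_true = 0.1 * x**2 + 0.5 * y
recon = surf_from_grad(np.diff(z_true, axis=1), np.diff(z_true, axis=0))
```

With noisy gradients one would add a regularization term to the stacked system, which is exactly the step the toolbox automates.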
KiT: a MATLAB package for kinetochore tracking
Armond, Jonathan W.; Vladimirou, Elina; McAinsh, Andrew D.; Burroughs, Nigel J.
2016-01-01
Summary: During mitosis, chromosomes are attached to the mitotic spindle via large protein complexes called kinetochores. The motion of kinetochores throughout mitosis is intricate and automated quantitative tracking of their motion has already revealed many surprising facets of their behaviour. Here, we present ‘KiT’ (Kinetochore Tracking)—an easy-to-use, open-source software package for tracking kinetochores from live-cell fluorescent movies. KiT supports 2D, 3D and multi-colour movies, quantification of fluorescence, integrated deconvolution, parallel execution and multiple algorithms for particle localization. Availability and implementation: KiT is free, open-source software implemented in MATLAB and runs on all MATLAB supported platforms. KiT can be downloaded as a package from http://www.mechanochemistry.org/mcainsh/software.php. The source repository is available at https://bitbucket.org/jarmond/kit and under continuing development. Supplementary information: Supplementary data are available at Bioinformatics online. Contact: jonathan.armond@warwick.ac.uk PMID:27153705
Elliptical quantum dots as on-demand single photons sources with deterministic polarization states
NASA Astrophysics Data System (ADS)
Teng, Chu-Hsiang; Zhang, Lei; Hill, Tyler A.; Demory, Brandon; Deng, Hui; Ku, Pei-Cheng
2015-11-01
In quantum information, control of the single photon's polarization is essential. Here, we demonstrate single photon generation in a pre-programmed and deterministic polarization state, on a chip-scale platform, utilizing site-controlled elliptical quantum dots (QDs) synthesized by a top-down approach. The polarization from the QD emission is found to be linear with a high degree of linear polarization and parallel to the long axis of the ellipse. Single photon emission with orthogonal polarizations is achieved, and the dependence of the degree of linear polarization on the QD geometry is analyzed.
Hunt, G.; Azmy, Y.Y.
1992-10-01
DCOR is a user-friendly computer implementation of a deterministic combat model developed at ORNL. To make the interpretation of the results more intuitive, a conversion of the numerical solution to a graphic animation sequence of battle evolution is desirable. DCOR uses a coarse computational spatial mesh superimposed on the battlefield. This research is aimed at developing robust methods for computing the position of the combative units over the continuum (and also pixeled) battlefield, from DCOR's discrete-variable solution representing the density of each force type evaluated at gridpoints. Three main problems have been identified and solutions have been devised and implemented in a new visualization module of DCOR. First, there is the problem of distributing the total number of objects, each representing a combative unit of each force type, among the gridpoints at each time level of the animation. This problem is solved by distributing, for each force type, the total number of combative units, one by one, to the gridpoint with the largest calculated number of units. Second, there is the problem of distributing the number of units assigned to each computational gridpoint over the battlefield area attributed to that point. This problem is solved by distributing the units within that area by taking into account the influence of surrounding gridpoints using linear interpolation. Finally, time interpolated solutions must be generated to produce a sufficient number of frames to create a smooth animation sequence. Currently, enough frames may be generated either by direct computation via the PDE solver or by using linear programming techniques to linearly interpolate intermediate frames between calculated frames.
Inherent Conservatism in Deterministic Quasi-Static Structural Analysis
NASA Technical Reports Server (NTRS)
Verderaime, V.
1997-01-01
The cause of the long-suspected excessive conservatism in the prevailing structural deterministic safety factor has been identified as an inherent violation of the error propagation laws when reducing statistical data to deterministic values and then combining them algebraically through successive structural computational processes. These errors are restricted to the applied stress computations, and because mean and variations of the tolerance limit format are added, the errors are positive, serially cumulative, and excessively conservative. Reliability methods circumvent these errors and provide more efficient and uniform safe structures. The document is a tutorial on the deficiencies and nature of the current safety factor and of its improvement and transition to absolute reliability.
Multiparty Controlled Deterministic Secure Quantum Communication Through Entanglement Swapping
NASA Astrophysics Data System (ADS)
Dong, Li; Xiu, Xiao-Ming; Gao, Ya-Jun; Chi, Feng
A three-party controlled deterministic secure quantum communication scheme through entanglement swapping is first proposed. In the scheme, the sender needs to prepare a class of Greenberger-Horne-Zeilinger (GHZ) states which are used as the quantum channel. The two communicators may communicate securely under the control of the controller if the quantum channel is safe. The roles of the sender, the receiver, and the controller can be exchanged owing to the symmetry of the quantum channel. Different from other controlled quantum secure communication schemes, this scheme needs less additional classical information for transferring the secret information. Finally, it is generalized to a multiparty controlled deterministic secure quantum communication scheme.
Deterministic and efficient quantum cryptography based on Bell's theorem
Chen Zengbing; Pan Jianwei; Zhang Qiang; Bao Xiaohui; Schmiedmayer, Joerg
2006-05-15
We propose a double-entanglement-based quantum cryptography protocol that is both efficient and deterministic. The proposal uses photon pairs with entanglement both in polarization and in time degrees of freedom; each measurement in which both of the two communicating parties register a photon can establish one and only one perfect correlation, and thus deterministically create a key bit. Eavesdropping can be detected by violation of local realism. A variation of the protocol shows higher security, similar to the six-state protocol, under individual attacks. Our scheme allows a robust implementation under current technology.
A fast algorithm for voxel-based deterministic simulation of X-ray imaging
NASA Astrophysics Data System (ADS)
Li, Ning; Zhao, Hua-Xia; Cho, Sang-Hyun; Choi, Jung-Gil; Kim, Myoung-Hee
2008-04-01
The deterministic method based on ray tracing is known as a powerful alternative to the Monte Carlo approach for virtual X-ray imaging. Algorithm speed is a critical issue when simulating hundreds of images, notably for simulating a tomographic acquisition or, even more demanding, X-ray radiographic video recordings. We present an algorithm for voxel-based deterministic simulation of X-ray imaging using voxel-driven forward and backward perspective projection operations and minimum bounding rectangles (MBRs). The algorithm is fast, easy to implement, and creates high-quality simulated radiographs. As a result, simulated radiographs can typically be obtained in a fraction of a second on a simple personal computer. Program summary: Program title: X-ray. Catalogue identifier: AEAD_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAD_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 416 257. No. of bytes in distributed program, including test data, etc.: 6 018 263. Distribution format: tar.gz. Programming language: C (Visual C++). Computer: Any PC. Tested on a DELL Precision 380 based on a Pentium D 3.20 GHz processor with 3.50 GB of RAM. Operating system: Windows XP. Classification: 14, 21.1. Nature of problem: Radiographic simulation of voxelized objects based on the ray tracing technique. Solution method: The core of the simulation is a fast routine for the calculation of ray-box intersections and minimum bounding rectangles, together with voxel-driven forward and backward perspective projection operations. Restrictions: Memory constraints. There are three programs in all. A. Program for test 3.1(1): Object and detector have axis-aligned orientation; B. Program for test 3.1(2): Object in arbitrary orientation; C. Program for test 3.2: Simulation of X-ray video
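The ray-box intersection named in the solution method is typically done with the standard slab test. The sketch below shows that building block in its generic textbook form (it is not the paper's actual routine, and the unit-cube example is invented).

```python
def ray_box(origin, direction, box_min, box_max):
    """Slab-method intersection of a ray with an axis-aligned box, the
    standard building block behind fast voxel ray tracers (generic
    sketch, not the paper's routine). Returns (t_near, t_far) or None."""
    t_near, t_far = -float("inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if d == 0.0:
            if o < lo or o > hi:        # parallel to this slab and outside it
                return None
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        if t1 > t2:
            t1, t2 = t2, t1
        t_near, t_far = max(t_near, t1), min(t_far, t2)
        if t_near > t_far or t_far < 0.0:
            return None                 # slabs do not overlap, or box is behind the ray
    return t_near, t_far

# Unit cube: a ray along +x from (-1, 0.5, 0.5) enters at t=1 and exits at t=2.
hit = ray_box((-1.0, 0.5, 0.5), (1.0, 0.0, 0.0), (0.0, 0.0, 0.0), (1.0, 1.0, 1.0))
miss = ray_box((-1.0, 2.0, 0.5), (1.0, 0.0, 0.0), (0.0, 0.0, 0.0), (1.0, 1.0, 1.0))
```

The entry/exit parameters returned here are what a voxel-driven projector accumulates along each ray.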
Patrick, Matthew R.; Kauahikaua, James P.; Antolik, Loren
2010-01-01
These scripts would require minor to moderate modifications for use elsewhere, primarily to customize directory navigation. If the user has some familiarity with MATLAB, or programming in general, these modifications should be easy. Although we originally anticipated needing the Image Processing Toolbox, the scripts in the appendixes do not require it. Thus, only the base installation of MATLAB is needed. Because fairly basic MATLAB functions are used, we expect that the scripts can be run successfully by MATLAB versions earlier than 2009b.
ELRIS2D: A MATLAB Package for the 2D Inversion of DC Resistivity/IP Data
NASA Astrophysics Data System (ADS)
Akca, Irfan
2016-04-01
ELRIS2D is an open source code written in MATLAB for the two-dimensional inversion of direct current resistivity (DCR) and time domain induced polarization (IP) data. The user interface of the program is designed for functionality and ease of use. All available settings of the program can be reached from the main window. The subsurface is discretized using a hybrid mesh generated by the combination of structured and unstructured meshes, which reduces the computational cost of the whole inversion procedure. The inversion routine is based on the smoothness constrained least squares method. In order to verify the program, responses of two test models and field data sets were inverted. The models inverted from the synthetic data sets are consistent with the original test models in both DC resistivity and IP cases. A field data set acquired in an archaeological site is also used for the verification of outcomes of the program in comparison with the excavation results.
CellSegm - a MATLAB toolbox for high-throughput 3D cell segmentation.
Hodneland, Erlend; Kögel, Tanja; Frei, Dominik Michael; Gerdes, Hans-Hermann; Lundervold, Arvid
2013-01-01
The application of fluorescence microscopy in cell biology often generates a huge amount of imaging data. Automated whole cell segmentation of such data enables the detection and analysis of individual cells, where a manual delineation is often time consuming, or practically not feasible. Furthermore, compared to manual analysis, automation normally has a higher degree of reproducibility. CellSegm, the software presented in this work, is a Matlab based command line software toolbox providing an automated whole cell segmentation of images showing surface stained cells, acquired by fluorescence microscopy. It has options for both fully automated and semi-automated cell segmentation. Major algorithmic steps are: (i) smoothing, (ii) Hessian-based ridge enhancement, (iii) marker-controlled watershed segmentation, and (iv) feature-based classification of cell candidates. Using a wide selection of image recordings and code snippets, we demonstrate that CellSegm has the ability to detect various types of surface stained cells in 3D. After detection and outlining of individual cells, the cell candidates can be subject to software based analysis, specified and programmed by the end-user, or they can be analyzed by other software tools. A segmentation of tissue samples with appropriate characteristics is also shown to be resolvable in CellSegm. The command-line interface of CellSegm facilitates scripting of the separate tools, all implemented in Matlab, offering a high degree of flexibility and tailored workflows for the end-user. The modularity and scripting capabilities of CellSegm enable automated workflows and quantitative analysis of microscopic data, suited for high-throughput image based screening. PMID:23938087
ERIC Educational Resources Information Center
Schulz, Laura E.; Hooppell, Catherine; Jenkins, Adrianna C.
2008-01-01
Three studies look at whether the assumption of causal determinism (the assumption that all else being equal, causes generate effects deterministically) affects children's imitation of modeled actions. These studies show even when the frequency of an effect is matched, both preschoolers (N = 60; M = 56 months) and toddlers (N = 48; M = 18 months)…
Risk-based versus deterministic explosives safety criteria
Wright, R.E.
1996-12-01
The Department of Defense Explosives Safety Board (DDESB) is actively considering ways to apply risk-based approaches in its decision-making processes. As such, an understanding of the impact of converting to risk-based criteria is required. The objectives of this project are to examine the benefits and drawbacks of risk-based criteria and to define the impact of converting from deterministic to risk-based criteria. Conclusions will be couched in terms that allow meaningful comparisons of deterministic and risk-based approaches. To this end, direct comparisons of the consequences and impacts of both deterministic and risk-based criteria at selected military installations are made. Deterministic criteria used in this report are those in DoD 6055.9-STD, "DoD Ammunition and Explosives Safety Standard." Risk-based criteria selected for comparison are those used by the government of Switzerland, "Technical Requirements for the Storage of Ammunition (TLM 75)." The risk-based criteria used in Switzerland were selected because they have been successfully applied for over twenty-five years.
A difference characteristic for one-dimensional deterministic systems
NASA Astrophysics Data System (ADS)
Shahverdian, A. Yu.; Apkarian, A. V.
2007-06-01
A numerical characteristic for one-dimensional deterministic systems, reflecting their higher-order difference structure, is introduced. A comparison with the Lyapunov exponent is given. A difference analogue of Eggleston's theorem, as well as an estimate of the Hausdorff dimension of the difference attractor, both formulated in terms of the new characteristic, are proved.
Techniques to quantify the sensitivity of deterministic model uncertainties
Ishigami, T.; Cazzoli, E.; Khatib-Rahbar; Unwin, S.D.
1989-04-01
Several existing methods for the assessment of the sensitivity of output uncertainty distributions generated by deterministic computer models to the uncertainty distributions assigned to the input parameters are reviewed and new techniques are proposed. Merits and limitations of the various techniques are examined by detailed application to the suppression pool aerosol removal code (SPARC).
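One of the simplest techniques in this family ranks inputs by the correlation between sampled input values and the resulting model output. The sketch below uses an invented stand-in model (it has nothing to do with SPARC) purely to show the mechanics of that kind of sensitivity measure.

```python
import numpy as np

rng = np.random.default_rng(2)

def model(x1, x2, x3):
    """Stand-in deterministic model (not SPARC): strong dependence on
    x1, weak on x2, none on x3. Purely illustrative."""
    return 5.0 * x1 + 0.5 * x2 + 0.0 * x3

# Propagate the input uncertainty distributions by Monte Carlo sampling,
# then rank each input by the magnitude of its correlation with the
# output -- one simple importance measure among those such reviews cover.
n = 5000
X = rng.normal(0.0, 1.0, size=(n, 3))
y = model(X[:, 0], X[:, 1], X[:, 2])
sens = [abs(np.corrcoef(X[:, k], y)[0, 1]) for k in range(3)]
```

For a real code the "model" call is the expensive deterministic run, and more refined measures (rank correlations, variance decompositions) follow the same sample-then-correlate pattern.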
Deterministic dense coding and faithful teleportation with multipartite graph states
Huang, C.-Y.; Yu, I-C.; Lin, F.-L.; Hsu, L.-Y.
2009-05-15
We propose schemes to perform deterministic dense coding and faithful teleportation with multipartite graph states. We also find the necessary and sufficient condition for a graph state to be viable for the proposed schemes. That is, for the associated graph, the reduced adjacency matrix of the Tanner-type subgraph between senders and receivers should be invertible.
From deterministic cellular automata to coupled map lattices
NASA Astrophysics Data System (ADS)
García-Morales, Vladimir
2016-07-01
A general mathematical method is presented for the systematic construction of coupled map lattices (CMLs) out of deterministic cellular automata (CAs). The entire CA rule space is addressed by means of a universal map for CAs that we have recently derived and that is not dependent on any freely adjustable parameters. The CMLs thus constructed are termed real-valued deterministic cellular automata (RDCA) and encompass all deterministic CAs in rule space in the asymptotic limit κ → 0 of a continuous parameter κ. Thus, RDCAs generalize CAs in such a way that they constitute CMLs when κ is finite and nonvanishing. In the limit κ → ∞ all RDCAs are shown to exhibit a global homogeneous fixed point that attracts all initial conditions. A new bifurcation is discovered for RDCAs and its location is exactly determined from the linear stability analysis of the global quiescent state. In this bifurcation, fuzziness gradually begins to intrude in a purely deterministic CA-like dynamics. The mathematical method presented allows one to gain insight into some highly nontrivial behavior found after the bifurcation.
A Unit on Deterministic Chaos for Student Teachers
ERIC Educational Resources Information Center
Stavrou, D.; Assimopoulos, S.; Skordoulis, C.
2013-01-01
A unit aiming to introduce pre-service teachers of primary education to the limited predictability of deterministic chaotic systems is presented. The unit is based on a commercial chaotic pendulum system connected with a data acquisition interface. The capabilities and difficulties in understanding the notion of limited predictability of 18…
Deterministic retrieval of complex Green's functions using hard X rays.
Vine, D J; Paganin, D M; Pavlov, K M; Uesugi, K; Takeuchi, A; Suzuki, Y; Yagi, N; Kämpfe, T; Kley, E-B; Förster, E
2009-01-30
A massively parallel deterministic method is described for reconstructing shift-invariant complex Green's functions. As a first experimental implementation, we use a single phase contrast x-ray image to reconstruct the complex Green's function associated with Bragg reflection from a thick perfect crystal. The reconstruction is in excellent agreement with a classic prediction of dynamical diffraction theory. PMID:19257417
Polar format algorithm for SAR imaging with Matlab
NASA Astrophysics Data System (ADS)
Deming, Ross; Best, Matthew; Farrell, Sean
2014-06-01
Due to its computational efficiency, the polar format algorithm (PFA) is considered by many to be the workhorse for airborne synthetic aperture radar (SAR) imaging. PFA is implemented in spatial Fourier space, also known as "K-space", which is a convenient domain for understanding SAR performance metrics, sampling requirements, etc. In this paper the mathematics behind PFA are explained and computed examples are presented, both using simulated data, and experimental airborne radar data from the Air Force Research Laboratory (AFRL) Gotcha Challenge collect. In addition, a simple graphical method is described that can be used to model and predict wavefront curvature artifacts in PFA imagery, which are due to the limited validity of the underlying far-field approximation. The appendix includes Matlab code for computing SAR images using PFA.
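The core PFA step, resampling polar K-space samples onto a rectangular grid and imaging with a 2-D inverse FFT, can be sketched on a toy point target. The geometry, grid sizes, and the use of `scipy.interpolate.griddata` below are all invented for illustration; this is not the paper's Matlab code or the AFRL data.

```python
import numpy as np
from scipy.interpolate import griddata

# Toy phase history of one point scatterer at scene centre, sampled on a
# polar K-space grid (all numbers invented for illustration).
x0, y0 = 0.0, 0.0
k = np.linspace(8.0, 12.0, 64)              # radial spatial frequencies
theta = np.linspace(-0.2, 0.2, 64)          # aspect angles (radians)
K, TH = np.meshgrid(k, theta)
kx, ky = K * np.cos(TH), K * np.sin(TH)
samples = np.exp(-1j * (kx * x0 + ky * y0)) # point-target phase history

# PFA core: resample the polar samples onto a rectangular K-space grid,
# then form the image with a 2-D inverse FFT.
kxg, kyg = np.meshgrid(np.linspace(kx.min(), kx.max(), 64),
                       np.linspace(ky.min(), ky.max(), 64))
pts = np.column_stack([kx.ravel(), ky.ravel()])
rect = (griddata(pts, samples.real.ravel(), (kxg, kyg), fill_value=0.0)
        + 1j * griddata(pts, samples.imag.ravel(), (kxg, kyg), fill_value=0.0))
image = np.abs(np.fft.fftshift(np.fft.ifft2(rect)))
peak = np.unravel_index(np.argmax(image), image.shape)  # scatterer at centre
```

The wavefront-curvature artifacts the paper models arise precisely because this rectangular-resampling step assumes far-field (planar) wavefronts.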
Perinatal mortality attributable to complications of childbirth in Matlab, Bangladesh.
Kusiako, T.; Ronsmans, C.; Van der Paal, L.
2000-01-01
Very few population-based studies of perinatal mortality in developing countries have examined the role of intrapartum risk factors. In the present study, the proportion of perinatal deaths that are attributable to complications during childbirth in Matlab, Bangladesh, was assessed using community-based data from a home-based programme led by professional midwives between 1987 and 1993. Complications during labour and delivery--such as prolonged or obstructed labour, abnormal fetal position, and hypertensive diseases of pregnancy--increased the risk of perinatal mortality fivefold and accounted for 30% of perinatal deaths. Premature labour, which occurred in 20% of pregnancies, accounted for 27% of perinatal mortality. Better care by qualified staff during delivery and improved care of newborns should substantially reduce perinatal mortality in this study population. PMID:10859856
PLMaddon: a power-law module for the Matlab SBToolbox.
Vera, Julio; Sun, Cheng; Oertel, Yvonne; Wolkenhauer, Olaf
2007-10-01
PLMaddon is a General Public License (GPL) software module designed to expand the current version of the SBToolbox (a Matlab toolbox for systems biology; www.sbtoolbox.org) with a set of functions for the analysis of power-law models, a specific class of kinetic models, set in ordinary differential equations (ODE) and in which the kinetic orders can have positive/negative non-integer values. The module includes functions to generate power-law Taylor expansions of other ODE models (e.g. Michaelis-Menten type models), as well as algorithms to estimate steady-states. The robustness and sensitivity of the models can also be analysed and visualized by computing the power-law's logarithmic gains and sensitivities. PMID:17495997
Correlation of oscillatory behaviour in Matlab using wavelets
NASA Astrophysics Data System (ADS)
Pering, T. D.; Tamburello, G.; McGonigle, A. J. S.; Hanna, E.; Aiuppa, A.
2014-09-01
Here we present a novel computational signal processing approach for comparing two signals of equal length and sampling rate, suitable for application across widely varying areas within the geosciences. By performing a continuous wavelet transform (CWT) followed by Spearman's rank correlation coefficient analysis, a graphical depiction of links between periodicities present in the two signals is generated via two or three dimensional images. In comparison with alternate approaches, e.g., wavelet coherence, this technique is simpler to implement and provides far clearer visual identification of the inter-series relationships. In particular, we report on a Matlab® code which executes this technique, and examples are given which demonstrate the programme application with artificially generated signals of known periodicity characteristics as well as with acquired geochemical and meteorological datasets.
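The CWT-then-Spearman pipeline can be sketched compactly. The sketch below is not the authors' Matlab® code: it uses an explicitly written Ricker wavelet, direct convolution for the CWT, and two synthetic signals sharing an assumed ~32-sample periodicity in place of the geochemical and meteorological series.

```python
import numpy as np
from scipy.stats import spearmanr

def ricker(points, a):
    """Ricker (Mexican-hat) wavelet of width parameter a, written out
    explicitly so no external wavelet helper is needed."""
    t = np.arange(points) - (points - 1) / 2.0
    amp = 2.0 / (np.sqrt(3.0 * a) * np.pi**0.25)
    return amp * (1.0 - (t / a) ** 2) * np.exp(-(t**2) / (2.0 * a**2))

def cwt(sig, widths):
    """Continuous wavelet transform by direct convolution, one row per width."""
    return np.array([np.convolve(sig, ricker(min(10 * w, len(sig)), w), mode="same")
                     for w in widths])

# Two synthetic signals sharing a ~32-sample periodicity (invented data).
t = np.arange(512)
s1 = np.sin(2 * np.pi * t / 32) + 0.3 * np.random.default_rng(3).normal(size=512)
s2 = np.sin(2 * np.pi * t / 32) + 0.3 * np.random.default_rng(4).normal(size=512)

widths = [4, 8, 16, 32]
c1, c2 = cwt(s1, widths), cwt(s2, widths)

# CWT followed by Spearman rank correlation: entry [i, j] links the
# scale-i coefficients of s1 with the scale-j coefficients of s2.
corr = np.array([[abs(spearmanr(a, b)[0]) for b in c2] for a in c1])
```

Plotting `corr` over the two scale axes gives the kind of two-dimensional inter-series periodicity map the paper describes.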
MATLAB code for estimating magnetic basement depth using prisms
NASA Astrophysics Data System (ADS)
Aydın, Ibrahim; Oksum, Erdinc
2012-09-01
There is a need, within both geophysical exploration and deep geophysical research, to estimate magnetic basement depth. Forward and inverse modeling studies to map the basement depth are commonly used within petroleum geophysics. To obtain the basement topography, modeling studies are made of the 2D profile data or 3D map data. In this study, a different algorithm was introduced to estimate the magnetic basement depth from map data. The algorithm is based on the analytical solution of exponential equations obtained from Fourier transformation of magnetic data. This algorithm has been tested on synthetic magnetic anomalies originated from multi-prisms. Following encouraging test results, the proposed algorithm was also tested on field data. The depths obtained from the proposed approach were satisfactory in comparison with the depths obtained from seismic survey cross-sections and boreholes. Basic MATLAB code is included in the Appendix.
A MATLAB GUI to study Ising model phase transition
NASA Astrophysics Data System (ADS)
Thornton, Curtislee; Datta, Trinanjan
We have created a MATLAB based graphical user interface (GUI) that simulates the single spin flip Metropolis Monte Carlo algorithm. The GUI has the capability to study temperature and external magnetic field dependence of magnetization, susceptibility, and equilibration behavior of the nearest-neighbor square lattice Ising model. Since the Ising model is a canonical system to study phase transition, the GUI can be used both for teaching and research purposes. The presence of a Monte Carlo code in a GUI format allows easy visualization of the simulation in real time and provides an attractive way to teach the concept of thermal phase transition and critical phenomena. We will also discuss the GUI implementation to study phase transition in a classical spin ice model on the pyrochlore lattice.
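The single-spin-flip Metropolis algorithm the GUI animates is standard and fits in a few lines. The sketch below is a textbook Python rendering (not the authors' MATLAB code) for the nearest-neighbour square lattice with J = 1 and periodic boundaries.

```python
import numpy as np

def metropolis_ising(L, T, steps, seed=0):
    """Single-spin-flip Metropolis updates on an L x L nearest-neighbour
    Ising lattice with periodic boundaries and J = 1 (textbook sketch of
    the algorithm the GUI animates, not the authors' code)."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(L, L))
    for _ in range(steps):
        i, j = rng.integers(L), rng.integers(L)
        nb = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
        dE = 2.0 * s[i, j] * nb        # energy cost of flipping spin (i, j)
        if dE <= 0.0 or rng.random() < np.exp(-dE / T):
            s[i, j] = -s[i, j]
    return s

# Above the critical temperature (T_c ~ 2.27 for the square lattice) the
# lattice stays disordered, so the magnetization per spin stays small;
# below T_c it orders and |m| typically approaches 1.
spins = metropolis_ising(L=16, T=5.0, steps=100_000)
m = abs(spins.mean())
```

Sweeping T across T_c and plotting m and the susceptibility reproduces the phase-transition behaviour the GUI is designed to visualize.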
MATLAB tools for lidar data conversion, visualization, and processing
NASA Astrophysics Data System (ADS)
Wang, Xiao; Zhou, Kaijing; Yang, Jie; Lu, Yilong
2011-10-01
LIDAR (LIght Detection and Ranging) [1] is an optical remote sensing technology that has gained increasing acceptance for topographic mapping. LIDAR technology has higher accuracy than RADAR and has wide applications. The relevant commercial market for LIDAR has developed greatly in the last few years. The LAS format is approved as the standard data format for interchanging LIDAR data among different software developers, manufacturers and end users. The LAS data format reduces the data size compared to the ASCII data format. However, LAS data files can only be visualized by some expensive commercial software. There are some free tools available, but they are not user-friendly and offer limited or poor visualization functionality. This makes it difficult for researchers to investigate and use LIDAR data. Therefore, there is a need to develop an efficient and low-cost LIDAR data toolbox. For this purpose we have developed a free and efficient Matlab tool for LIDAR data conversion, visualization and processing.
Matlab Cluster Ensemble Toolbox v. 1.0
2009-04-27
This is a Matlab toolbox for investigating the application of cluster ensembles to data classification, with the objective of improving the accuracy and/or speed of clustering. The toolbox divides the cluster ensemble problem into four areas, providing functionality for each. These include: (1) synthetic data generation, (2) clustering to generate individual data partitions and similarity matrices, (3) consensus function generation and final clustering to generate ensemble data partitioning, and (4) implementation of accuracy metrics. With regard to data generation, Gaussian data of arbitrary dimension can be generated. The kcenters algorithm can then be used to generate individual data partitions either (a) by subsampling the data and clustering each subsample, or (b) by randomly initializing the algorithm and generating a clustering for each initialization. In either case an overall similarity matrix can be computed using a consensus function operating on the individual similarity matrices. A final clustering can be performed and performance metrics are provided for evaluation purposes.
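A common consensus function for step (3) is the co-association matrix: the fraction of ensemble partitions that place each pair of points in the same cluster. The sketch below shows that one consensus function on invented toy partitions; the toolbox offers this style of combination among others, and the code is not the toolbox itself.

```python
import numpy as np

def coassociation(partitions):
    """Consensus (co-association) matrix: entry [i, j] is the fraction of
    ensemble partitions that place points i and j in the same cluster
    (one common consensus function; illustrative sketch only)."""
    n = len(partitions[0])
    C = np.zeros((n, n))
    for labels in partitions:
        labels = np.asarray(labels)
        C += (labels[:, None] == labels[None, :]).astype(float)
    return C / len(partitions)

# Three noisy partitions of six points that mostly agree on {0,1,2} vs {3,4,5}.
parts = [[0, 0, 0, 1, 1, 1],
         [1, 1, 1, 0, 0, 0],     # the same grouping with relabelled clusters
         [0, 0, 1, 1, 1, 1]]     # point 2 misassigned once
C = coassociation(parts)
```

Note the matrix is invariant to cluster relabelling, which is why a final clustering run on C recovers the consensus grouping.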
Matlab Stability and Control Toolbox: Trim and Static Stability Module
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.
2006-01-01
This paper presents the technical background of the Trim and Static module of the Matlab Stability and Control Toolbox. This module performs a low-fidelity stability and control assessment of an aircraft model for a set of flight critical conditions. This is attained by determining if the control authority available for trim is sufficient and if the static stability characteristics are adequate. These conditions can be selected from a prescribed set or can be specified to meet particular requirements. The prescribed set of conditions includes horizontal flight, take-off rotation, landing flare, steady roll, steady turn and pull-up/ push-over flight, for which several operating conditions can be specified. A mathematical model was developed allowing for six-dimensional trim, adjustable inertial properties, asymmetric vehicle layouts, arbitrary number of engines, multi-axial thrust vectoring, engine(s)-out conditions, crosswind and gyroscopic effects.
A smart grid simulation testbed using Matlab/Simulink
NASA Astrophysics Data System (ADS)
Mallapuram, Sriharsha; Moulema, Paul; Yu, Wei
2014-06-01
The smart grid is the integration of computing and communication technologies into a power grid with a goal of enabling real time control, and a reliable, secure, and efficient energy system [1]. With the increased interest of the research community and stakeholders towards the smart grid, a number of solutions and algorithms have been developed and proposed to address issues related to smart grid operations and functions. Those technologies and solutions need to be tested and validated before implementation using software simulators. In this paper, we developed a general smart grid simulation model in the MATLAB/Simulink environment, which integrates renewable energy resources, energy storage technology, load monitoring and control capability. To demonstrate and validate the effectiveness of our simulation model, we created simulation scenarios and performed simulations using a real-world data set provided by the Pecan Street Research Institute.
Epidemiology of child deaths due to drowning in Matlab, Bangladesh.
Ahmed, M K; Rahman, M; van Ginneken, J
1999-04-01
A study based upon verbal autopsies conducted in a sample of children who died in Bangladesh during 1989-92 found that approximately 21% of deaths among children aged 1-4 years were due to drowning. Such mortality may be expected in Bangladesh, for its villages are usually surrounded and intersected by canals and rivers, and there are many ponds surrounding households which are used for bathing and washing year round. Children also play in these bodies of water, and most villages are inundated by the monsoon for several months each year. Drawn from the Matlab Demographic Surveillance System (DSS) operated by the International Center for Diarrheal Disease Research, Bangladesh (ICDDR,B), data are presented on the mortality of children aged 1-4 years due to drowning in Matlab thana, a rural area of Bangladesh, during 1983-95. 10-25% of child deaths during 1983-95 were due to drowning. The absolute risk of dying from drowning remained almost the same over the study period, but the proportion of drownings to all causes of death increased. Drowning is especially prevalent during the second year of life. Mother's age and parity significantly affect drowning, with the risk of dying from drowning increasing with mother's age and far more sharply with the number of living children in the family. Maternal education and dwelling space had no influence upon the risk of drowning. A major portion of these deaths could be averted if parents and other close relatives paid more attention to child safety. PMID:10342696
A MATLAB-Aided Method for Teaching Calculus-Based Business Mathematics
ERIC Educational Resources Information Center
Liang, Jiajuan; Pan, William S. Y.
2009-01-01
MATLAB is a powerful package for numerical computation. MATLAB contains a rich pool of mathematical functions and provides flexible plotting functions for illustrating mathematical solutions. The course of calculus-based business mathematics consists of two major topics: 1) derivative and its applications in business; and 2) integration and its…
A Matlab/Simulink-Based Interactive Module for Servo Systems Learning
ERIC Educational Resources Information Center
Aliane, N.
2010-01-01
This paper presents an interactive module for learning both the fundamental and practical issues of servo systems. This module, developed using Simulink in conjunction with the Matlab graphical user interface (Matlab-GUI) tool, is used to supplement conventional lectures in control engineering and robotics subjects. First, the paper introduces the…
Not Available
1991-03-01
This report summarizes the results of a deterministic assessment of earthquake ground motions at the Savannah River Site (SRS). The purpose of this study is to assist the Environmental Sciences Section of the Savannah River Laboratory in reevaluating the design basis earthquake (DBE) ground motion at SRS using approaches defined in Appendix A to 10 CFR Part 100. This work is in support of the Seismic Engineering Section's Seismic Qualification Program for reactor restart.
Deterministically Polarized Fluorescence from Single Dye Molecules Aligned in Liquid Crystal Host
Lukishova, S.G.; Schmid, A.W.; Knox, R.; Freivald, P.; Boyd, R. W.; Stroud, Jr., C. R.; Marshall, K.L.
2005-09-30
We demonstrated, for the first time to our knowledge, deterministically polarized fluorescence from single dye molecules. Planar-aligned nematic liquid crystal hosts provide deterministic alignment of single dye molecules in a preferred direction.
NASA Astrophysics Data System (ADS)
Mirzadeh, Zeynab; Mehri, Razieh; Rabbani, Hossein
2010-01-01
In this paper the degraded video with blur and noise is enhanced using an algorithm based on an iterative procedure. In this algorithm we first estimate the clean data and blur function using the Newton optimization method, and the estimates are then improved using appropriate denoising methods. These noise reduction techniques are based on local statistics of the clean data and blur function. For the estimated blur function we use the LPA-ICI (local polynomial approximation - intersection of confidence intervals) method, which uses an anisotropic window around each point and obtains the enhanced data by employing a Wiener filter in this local window. Similarly, to improve the quality of the estimated clean video, we first transform the data to the wavelet domain and then improve our estimate using a maximum a posteriori (MAP) estimator with a local Laplace prior. This procedure (initial estimation and improvement of the estimation by denoising) is iterated until the clean video is obtained. The implementation of this algorithm is slow in the MATLAB environment, so it is not suitable for online applications. However, MATLAB has the capability of running functions written in C. The files which hold the source for these functions are called MEX-files. The MEX functions allow system-specific APIs to be called to extend MATLAB's abilities. So, in this paper, to speed up our algorithm, the MATLAB code is divided into sections, the elapsed time for each section is measured, and the slow sections (which account for 60% of the total running time) are selected. These slow sections are then translated to C++ and linked to MATLAB. In fact, the high volume of image data processed in the "for" loops of the relevant code makes MATLAB an unsuitable candidate for writing such programs. The MATLAB code for our video deblurring algorithm contains eight "for" loops. These eight loops consume 60% of the total execution time of the entire program and so the runtime should be
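The profile-then-port workflow described above (time each code section, then move the hot sections to a compiled language) can be sketched in Python as an analogue of the MATLAB/MEX approach; the section names and workloads below are purely illustrative:

```python
import time

def timed_sections(sections):
    """Run named zero-argument functions in order and report each one's
    share of the total runtime."""
    elapsed = {}
    for name, fn in sections:
        t0 = time.perf_counter()
        fn()
        elapsed[name] = time.perf_counter() - t0
    total = sum(elapsed.values())
    return {name: t / total for name, t in elapsed.items()}

# Hypothetical workload: one cheap section and one loop-heavy section.
def setup():
    sum(range(1000))

def main_loop():
    acc = 0.0
    for i in range(500_000):
        acc += i * 0.5

shares = timed_sections([("setup", setup), ("main_loop", main_loop)])
# Sections consuming most of the runtime are the candidates for porting
# to C/C++ (via MEX in MATLAB, or a C extension in Python).
hot = [name for name, share in shares.items() if share > 0.6]
print(hot)
```

Only the dominant sections are worth porting; the rest stay in the scripting language, exactly as the abstract's 60%-of-runtime criterion suggests.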
Boyd, O.S.
2006-01-01
We have created a second-order finite-difference solution to the anisotropic elastic wave equation in three dimensions and implemented the solution as an efficient Matlab script. This program allows the user to generate synthetic seismograms for three-dimensional anisotropic earth structure. The code was written for teleseismic wave propagation in the 1-0.1 Hz frequency range but is of general utility and can be used at all scales of space and time. This program was created to help distinguish among various types of lithospheric structure given the uneven distribution of sources and receivers commonly utilized in passive source seismology. Several successful implementations have resulted in a better appreciation for subduction zone structure, the fate of a transform fault with depth, lithospheric delamination, and the effects of wavefield focusing and defocusing on attenuation. Companion scripts are provided which help the user prepare input to the finite-difference solution. Boundary conditions including specification of the initial wavefield, absorption and two types of reflection are available. © 2005 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Boyd, Oliver S.
2006-03-01
We have created a second-order finite-difference solution to the anisotropic elastic wave equation in three dimensions and implemented the solution as an efficient Matlab script. This program allows the user to generate synthetic seismograms for three-dimensional anisotropic earth structure. The code was written for teleseismic wave propagation in the 1-0.1 Hz frequency range but is of general utility and can be used at all scales of space and time. This program was created to help distinguish among various types of lithospheric structure given the uneven distribution of sources and receivers commonly utilized in passive source seismology. Several successful implementations have resulted in a better appreciation for subduction zone structure, the fate of a transform fault with depth, lithospheric delamination, and the effects of wavefield focusing and defocusing on attenuation. Companion scripts are provided which help the user prepare input to the finite-difference solution. Boundary conditions including specification of the initial wavefield, absorption and two types of reflection are available.
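The records above describe a second-order finite-difference solution of the 3-D anisotropic elastic wave equation in Matlab. A minimal 1-D scalar analogue (not the authors' code; grid sizes and the initial pulse are illustrative) shows the core second-order update stencil:

```python
import numpy as np

def wave_1d(nx=200, nt=400, c=1.0, dx=1.0, dt=0.5):
    """Second-order finite-difference solution of the 1-D scalar wave
    equation u_tt = c^2 u_xx with fixed (zero) boundaries.

    Stability requires the CFL condition c*dt/dx <= 1.
    """
    r2 = (c * dt / dx) ** 2
    x = np.arange(nx)
    # Initial condition: a Gaussian pulse in the middle, zero velocity.
    u = np.exp(-0.01 * (x - nx // 2) ** 2)
    u_prev = u.copy()
    for _ in range(nt):
        u_next = np.zeros(nx)
        # Centered second differences in time and space.
        u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                        + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
        u_prev, u = u, u_next
    return u

seis = wave_1d()
print(seis.shape)
```

The 3-D anisotropic case replaces the scalar Laplacian with the divergence of the full stiffness-tensor stress, but the leapfrog time update has the same structure.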
Evaluation the initial estimators using deterministic minimum covariance determinant algorithm
NASA Astrophysics Data System (ADS)
Alrawashdeh, Mufda Jameel; Sabri, Shamsul Rijal Muhammad; Ismail, Mohd Tahir
2014-07-01
The aim of this study is to examine the five initial estimators introduced by Hubert et al. [1], together with five additional new initial estimators, using the Deterministic Minimum Covariance Determinant (DetMCD) algorithm. The objective of DetMCD is to robustly estimate the location and scatter matrix parameters. Since these parameters are highly influenced by the presence of outliers, DetMCD was constructed as a highly robust algorithm that overcomes the outlier problem. DetMCD starts from non-random subsets, computing a small number of deterministic initial estimators followed by concentration steps. Here we compare the DetMCD algorithm on two groups of estimators - one with the original five estimators of Hubert et al. and the other with the five new estimators. The determinant values of these estimators are observed to evaluate performance in several cases.
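The concentration steps mentioned above are the workhorse of (Det)MCD: starting from an initial h-subset, each step keeps the h points closest in Mahalanobis distance to the current location/scatter estimate, which never increases the covariance determinant. A minimal sketch (the data set and subset size are illustrative, not the paper's):

```python
import numpy as np

def c_step(X, subset_idx, h):
    """One concentration step: estimate location/scatter from the current
    h-subset, then keep the h points with smallest Mahalanobis distance."""
    sub = X[subset_idx]
    loc = sub.mean(axis=0)
    cov = np.cov(sub, rowvar=False)
    diff = X - loc
    d2 = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)
    return np.argsort(d2)[:h]

def concentrate(X, init_idx, h, n_steps=20):
    idx = np.asarray(init_idx)
    for _ in range(n_steps):
        new_idx = c_step(X, idx, h)
        if set(new_idx) == set(idx):  # subset stopped changing: converged
            break
        idx = new_idx
    return idx

rng = np.random.default_rng(0)
# 90 inliers around the origin plus 10 gross outliers near (10, 10).
X = np.vstack([rng.normal(0, 1, (90, 2)), rng.normal(10, 1, (10, 2))])
idx = concentrate(X, init_idx=np.arange(75), h=75)
print(np.sort(idx))
```

Starting from a reasonable (here, deterministic) initial subset, the converged h-subset excludes the outliers, so the resulting location and scatter estimates are robust.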
Deterministic blade row interactions in a centrifugal compressor stage
NASA Technical Reports Server (NTRS)
Kirtley, K. R.; Beach, T. A.
1991-01-01
The three-dimensional viscous flow in a low speed centrifugal compressor stage is simulated using an average passage Navier-Stokes analysis. The impeller discharge flow is of the jet/wake type with low momentum fluid in the shroud-pressure side corner coincident with the tip leakage vortex. This nonuniformity introduces periodic unsteadiness in the vane frame of reference. The effect of such deterministic unsteadiness on the time-mean is included in the analysis through the average passage stress, which allows the analysis of blade row interactions. The magnitude of the divergence of the deterministic unsteady stress is of the order of the divergence of the Reynolds stress over most of the span, from the impeller trailing edge to the vane throat. Although the potential effects on the blade trailing edge from the diffuser vane are small, strong secondary flows generated by the impeller degrade the performance of the diffuser vanes.
Deterministic remote two-qubit state preparation in dissipative environments
NASA Astrophysics Data System (ADS)
Li, Jin-Fang; Liu, Jin-Ming; Feng, Xun-Li; Oh, C. H.
2016-05-01
We propose a new scheme for efficient remote preparation of an arbitrary two-qubit state, introducing two auxiliary qubits and using two Einstein-Podolsky-Rosen (EPR) states as the quantum channel in a non-recursive way. At variance with all existing schemes, our scheme accomplishes deterministic remote state preparation (RSP) with only one sender and the simplest entangled resource (say, EPR pairs). We construct the corresponding quantum logic circuit using a unitary matrix decomposition procedure and analytically obtain the average fidelity of the deterministic RSP process for dissipative environments. Our studies show that, while the average fidelity gradually decreases to a stable value without any revival in the Markovian regime, it decreases to the same stable value with a dampened revival amplitude in the non-Markovian regime. We also find that the average fidelity's approximate maximal value can be preserved for a long time if the non-Markovian and the detuning conditions are satisfied simultaneously.
Deterministic synthesis of mechanical NOON states in ultrastrong optomechanics
NASA Astrophysics Data System (ADS)
Macrí, V.; Garziano, L.; Ridolfo, A.; Di Stefano, O.; Savasta, S.
2016-07-01
We propose a protocol for the deterministic preparation of entangled NOON mechanical states. The system is constituted by two identical, optically coupled optomechanical systems. The protocol consists of two steps. In the first, one of the two optical resonators is excited by a resonant external π-like Gaussian optical pulse. When the optical excitation coherently partly transfers to the second cavity, the second step starts. It consists of sending simultaneously two additional π-like Gaussian optical pulses, one at each optical resonator, with specific frequencies. In the optomechanical ultrastrong coupling regime, when the coupling strength becomes a significant fraction of the mechanical frequency, we show that NOON mechanical states with quite high Fock states can be deterministically obtained. The operating range of this protocol is carefully analyzed. Calculations have been carried out taking into account the presence of decoherence, thermal noise, and imperfect cooling.
Deterministic error correction for nonlocal spatial-polarization hyperentanglement
Li, Tao; Wang, Guan-Yu; Deng, Fu-Guo; Long, Gui-Lu
2016-01-01
Hyperentanglement is an effective quantum source for quantum communication networks due to its high capacity, low loss rate, and its ability to teleport a quantum particle completely. Here we present a deterministic error-correction scheme for nonlocal spatial-polarization hyperentangled photon pairs over collective-noise channels. In our scheme, the spatial-polarization hyperentanglement is first encoded into a spatially defined time-bin entanglement with identical polarization before it is transmitted over collective-noise channels, which leads to the rejection of errors in the spatial entanglement during transmission. The polarization noise affecting the polarization entanglement can be corrected with a proper one-step decoding procedure. The two parties in quantum communication can, in principle, obtain a nonlocal maximally entangled spatial-polarization hyperentanglement in a deterministic way, which makes our protocol more convenient than others in long-distance quantum communication. PMID:26861681
Approaches to implementing deterministic models in a probabilistic framework
Talbott, D.V.
1995-04-01
The increasing use of results from probabilistic risk assessments in the decision-making process makes it ever more important to eliminate simplifications in probabilistic models that might lead to conservative results. One area in which conservative simplifications are often made is modeling the physical interactions that occur during the progression of an accident sequence. This paper demonstrates and compares different approaches for incorporating deterministic models of physical parameters into probabilistic models; parameter range binning, response curves, and integral deterministic models. An example that combines all three approaches in a probabilistic model for the handling of an energetic material (i.e. high explosive, rocket propellant,...) is then presented using a directed graph model.
Deterministic error correction for nonlocal spatial-polarization hyperentanglement
NASA Astrophysics Data System (ADS)
Li, Tao; Wang, Guan-Yu; Deng, Fu-Guo; Long, Gui-Lu
2016-02-01
Hyperentanglement is an effective quantum source for quantum communication networks due to its high capacity, low loss rate, and its ability to teleport a quantum particle completely. Here we present a deterministic error-correction scheme for nonlocal spatial-polarization hyperentangled photon pairs over collective-noise channels. In our scheme, the spatial-polarization hyperentanglement is first encoded into a spatially defined time-bin entanglement with identical polarization before it is transmitted over collective-noise channels, which leads to the rejection of errors in the spatial entanglement during transmission. The polarization noise affecting the polarization entanglement can be corrected with a proper one-step decoding procedure. The two parties in quantum communication can, in principle, obtain a nonlocal maximally entangled spatial-polarization hyperentanglement in a deterministic way, which makes our protocol more convenient than others in long-distance quantum communication.
Deterministic algorithm with agglomerative heuristic for location problems
NASA Astrophysics Data System (ADS)
Kazakovtsev, L.; Stupina, A.
2015-10-01
The authors consider the clustering problem solved with the k-means method and the p-median problem with various distance metrics. The p-median problem, and the k-means problem as its special case, are the most popular models of location theory. They are used for solving clustering problems and many practically important logistic problems, such as the optimal location of factories or warehouses, oil or gas wells, optimal offshore drilling, and steam generators in heavy oil fields. The authors propose a new deterministic heuristic algorithm based on the ideas of Information Bottleneck Clustering and genetic algorithms with a greedy heuristic. In this paper, results of running the new algorithm on various data sets are given in comparison with known deterministic and stochastic methods. The new algorithm is shown to be significantly faster than the Information Bottleneck Clustering method while achieving comparable precision.
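The p-median objective above (minimize the total distance from demand points to their nearest open facility, with facilities chosen among the points) admits a simple greedy heuristic, sketched here on a toy data set. This illustrates the problem itself, not the authors' Information-Bottleneck-based algorithm:

```python
def greedy_p_median(points, p):
    """Greedy heuristic for the p-median problem: repeatedly open the
    facility (chosen among the points themselves) that most reduces the
    total distance from every point to its nearest open facility."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    open_set = []
    nearest = [float('inf')] * len(points)   # distance to nearest open facility
    for _ in range(p):
        best, best_cost = None, float('inf')
        for j, cand in enumerate(points):
            if j in open_set:
                continue
            cost = sum(min(nearest[i], dist(pt, cand))
                       for i, pt in enumerate(points))
            if cost < best_cost:
                best, best_cost = j, cost
        open_set.append(best)
        nearest = [min(nearest[i], dist(pt, points[best]))
                   for i, pt in enumerate(points)]
    return open_set, sum(nearest)

# Two well-separated clusters; with p=2 the heuristic opens one facility
# in each cluster.
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
medians, total = greedy_p_median(pts, 2)
print(sorted(medians), total)
```

The greedy heuristic is fast but not optimal in general, which is why metaheuristics such as genetic algorithms with greedy recombination, as in the paper, are used on harder instances.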
A deterministic algorithm for constrained enumeration of transmembrane protein folds.
Brown, William Michael; Young, Malin M.; Sale, Kenneth L.; Faulon, Jean-Loup Michel; Schoeniger, Joseph S.
2004-07-01
A deterministic algorithm for enumeration of transmembrane protein folds is presented. Using a set of sparse pairwise atomic distance constraints (such as those obtained from chemical cross-linking, FRET, or dipolar EPR experiments), the algorithm performs an exhaustive search of secondary structure element packing conformations distributed throughout the entire conformational space. The end result is a set of distinct protein conformations, which can be scored and refined as part of a process designed for computational elucidation of transmembrane protein structures.
Nano transfer and nanoreplication using deterministically grown sacrificial nanotemplates
Melechko, Anatoli V.; McKnight, Timothy E.; Guillorn, Michael A.; Ilic, Bojan; Merkulov, Vladimir I.; Doktycz, Mitchel J.; Lowndes, Douglas H.; Simpson, Michael L.
2012-03-27
Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. An apparatus, includes a substrate and a nanoconduit material coupled to a surface of the substrate. The substrate defines an aperture and the nanoconduit material defines a nanoconduit that is i) contiguous with the aperture and ii) aligned substantially non-parallel to a plane defined by the surface of the substrate.
Comment on: Supervisory Asymmetric Deterministic Secure Quantum Communication
NASA Astrophysics Data System (ADS)
Kao, Shih-Hung; Tsai, Chia-Wei; Hwang, Tzonelih
2012-12-01
In 2010, Xiu et al. (Optics Communications 284:2065-2069, 2011) proposed several applications based on a new secure four-site distribution scheme using χ-type entangled states. This paper points out that one of these applications, namely, supervisory asymmetric deterministic secure quantum communication, is subject to an information leakage problem, in which the receiver can extract two bits of a three-bit secret message without the supervisor's permission. An enhanced protocol is proposed to resolve this problem.
The deterministic SIS epidemic model in a Markovian random environment.
Economou, Antonis; Lopez-Herrero, Maria Jesus
2016-07-01
We consider the classical deterministic susceptible-infective-susceptible epidemic model, where the infection and recovery rates depend on a background environmental process that is modeled by a continuous time Markov chain. This framework is able to capture several important characteristics that appear in the evolution of real epidemics in large populations, such as seasonality effects and environmental influences. We propose computational approaches for the determination of various distributions that quantify the evolution of the number of infectives in the population. PMID:26515172
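A minimal sketch of the model class described above: deterministic SIS dynamics whose infection and recovery rates are modulated by a two-state continuous-time Markov chain standing in for the environment. All parameter values are illustrative, not taken from the paper:

```python
import random

def sis_markov_env(N=1000, I0=10, T=100.0, dt=0.01, seed=1):
    """Deterministic SIS dynamics dI/dt = beta*(N-I)*I/N - gamma*I, where
    (beta, gamma) depend on a two-state environment that switches at
    exponentially distributed times (a continuous-time Markov chain)."""
    rng = random.Random(seed)
    params = {0: (0.5, 0.1), 1: (0.2, 0.3)}   # (beta, gamma) per state
    rates = {0: 0.05, 1: 0.05}                # environment switch rates
    env, t, I = 0, 0.0, float(I0)
    next_switch = rng.expovariate(rates[env])
    while t < T:
        if t >= next_switch:                  # environment jumps
            env = 1 - env
            next_switch = t + rng.expovariate(rates[env])
        beta, gamma = params[env]
        I += dt * (beta * (N - I) * I / N - gamma * I)  # forward Euler
        I = max(I, 0.0)
        t += dt
    return I

I_final = sis_markov_env()
print(I_final)
```

State 0 is supercritical (the infection persists) and state 1 subcritical (it decays), so the trajectory alternates between growth toward an endemic level and decay, capturing the seasonality effects the paper refers to.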
Deterministic entanglement of two neutral atoms via Rydberg blockade
Zhang, X. L.; Isenhower, L.; Gill, A. T.; Walker, T. G.; Saffman, M.
2010-09-15
We demonstrate the deterministic entanglement of two individually addressed neutral atoms using a Rydberg blockade mediated controlled-NOT gate. Parity oscillation measurements reveal a Bell state fidelity of F=0.58±0.04, which is above the entanglement threshold of F=0.5, without any correction for atom loss, and F=0.71±0.05 after correcting for background collisional losses. The fidelity results are shown to be in good agreement with a detailed error model.
Probabilistic vs deterministic views in facing natural hazards
NASA Astrophysics Data System (ADS)
Arattano, Massimo; Coviello, Velio
2015-04-01
Natural hazards can be mitigated through active or passive measures. Among the latter, Early Warning Systems (EWSs) are playing an increasingly significant role. In particular, a growing number of studies investigate the reliability of landslide EWSs, their comparability to alternative protection measures, and their cost-effectiveness. EWSs, however, inevitably and intrinsically involve the concepts of probability of occurrence and probability of error. Science long ago accepted and integrated the probabilistic nature of reality and its phenomena. The same cannot be said for other fields of knowledge, such as law or politics, with which scientists sometimes have to interact; these disciplines are still tied to more deterministic views of life. The same is true of public opinion, which often demands, or even presumes, a deterministic type of answer to its needs. As an example, it might be easy for people to feel completely safe because an EWS has been installed. It is also easy for an administrator or a politician to help spread this false sense of security, together with the idea of having dealt with the problem and done something definitive to face it. Can geoethics play a role in creating a link between the probabilistic world of nature and science and society's tendency toward a more deterministic view of things? Answering this question could help scientists feel more confident in planning and performing their research activities.
Deterministic form correction of extreme freeform optical surfaces
NASA Astrophysics Data System (ADS)
Lynch, Timothy P.; Myer, Brian W.; Medicus, Kate; DeGroote Nelson, Jessica
2015-10-01
The blistering pace of recent technological advances has led lens designers to rely increasingly on freeform optical components as crucial pieces of their designs. As these freeform components increase in geometrical complexity and continue to deviate further from traditional optical designs, the optical manufacturing community must rethink their fabrication processes in order to keep pace. To meet these new demands, Optimax has developed a variety of new deterministic freeform manufacturing processes. Combining traditional optical fabrication techniques with cutting edge technological innovations has yielded a multifaceted manufacturing approach that can successfully handle even the most extreme freeform optical surfaces. In particular, Optimax has placed emphasis on refining the deterministic form correction process. By developing many of these procedures in house, changes can be implemented quickly and efficiently in order to rapidly converge on an optimal manufacturing method. Advances in metrology techniques allow for rapid identification and quantification of irregularities in freeform surfaces, while deterministic correction algorithms precisely target features on the part and drastically reduce overall correction time. Together, these improvements have yielded significant advances in the realm of freeform manufacturing. With further refinements to these and other aspects of the freeform manufacturing process, the production of increasingly radical freeform optical components is quickly becoming a reality.
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
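The fission-matrix idea above can be illustrated with a two-region toy problem: when the dominance ratio (ratio of the second to the first eigenvalue) is close to one, plain power iteration converges slowly, while solving the small fission-matrix eigenproblem yields the fundamental source directly. The matrix values are illustrative, not from the thesis:

```python
import numpy as np

# Toy symmetric "fission matrix" for a two-region system. Its eigenvalues
# are 0.51 and 0.49, so the dominance ratio 0.49/0.51 ~ 0.96 makes plain
# power iteration converge slowly.
F = np.array([[0.50, 0.01],
              [0.01, 0.50]])

def power_iteration(F, n_iter):
    """Unaccelerated source iteration from a badly converged guess."""
    s = np.array([1.0, 0.0])
    for _ in range(n_iter):
        s = F @ s
        s /= s.sum()
    return s

# Fission-matrix acceleration: solve the small eigenproblem directly
# instead of waiting for the power iteration to converge.
w, v = np.linalg.eigh(F)                # eigenvalues in ascending order
fundamental = np.abs(v[:, -1])          # eigenvector of the largest one
fundamental /= fundamental.sum()

slow = power_iteration(F, 5)
print(fundamental, slow)
```

After five unaccelerated iterations the source is still far from the symmetric fundamental mode, while the eigenproblem solve recovers it at once; in Monte Carlo the matrix entries are themselves statistical estimates, which is where the noise-filtering discussed in the thesis comes in.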
Deterministic generation of remote entanglement with active quantum feedback
Martin, Leigh; Motzoi, Felix; Li, Hanhan; Sarovar, Mohan; Whaley, K. Birgitta
2015-12-10
We develop and study protocols for deterministic remote entanglement generation using quantum feedback, without relying on an entangling Hamiltonian. In order to formulate the most effective experimentally feasible protocol, we introduce the notion of average-sense locally optimal feedback protocols, which do not require real-time quantum state estimation, a difficult component of real-time quantum feedback control. We use this notion of optimality to construct two protocols that can deterministically create maximal entanglement: a semiclassical feedback protocol for low-efficiency measurements and a quantum feedback protocol for high-efficiency measurements. The latter reduces to direct feedback in the continuous-time limit, whose dynamics can be modeled by a Wiseman-Milburn feedback master equation, which yields an analytic solution in the limit of unit measurement efficiency. Our formalism can smoothly interpolate between continuous-time and discrete-time descriptions of feedback dynamics and we exploit this feature to derive a superior hybrid protocol for arbitrary nonunit measurement efficiency that switches between quantum and semiclassical protocols. Lastly, we show using simulations incorporating experimental imperfections that deterministic entanglement of remote superconducting qubits may be achieved with current technology using the continuous-time feedback protocol alone.
Circulant Graph Modeling Deterministic Small-World Networks
NASA Astrophysics Data System (ADS)
Zhao, Chenggui
In recent years, many research works have revealed that some technological networks, including the Internet, are small-world networks, which is attracting attention from computer scientists. One can decide whether or not a real network is small-world by whether it has high local clustering and small average path distance, the two distinguishing characteristics of small-world networks. So far, researchers have presented many small-world models by dynamically evolving a deterministic network into a small-world one through the stochastic addition of vertices and edges; rather few works have focused on deterministic models. In this paper, the circulant graph, an important kind of Cayley graph, is proposed as a model of deterministic small-world networks, on account of its simple structure and significant adaptability. It is shown that the circulant graphs constructed in this paper exhibit the two expected small-world characteristics. This work should be useful because circulant graphs have served as models of communication and computer networks, and the small-world characteristic will be helpful in the design and analysis of their structure and performance.
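The two metrics named above (local clustering and average path length) can be computed directly for a circulant graph C_n(1, ..., k), in which vertex i is joined to i ± 1, ..., i ± k (mod n). A small sketch; the choice n = 50 with offsets {1, 2, 3} is illustrative, not a construction from the paper:

```python
from collections import deque

def circulant_graph(n, offsets):
    """Adjacency sets of the circulant graph C_n(offsets): vertex i is
    joined to (i + s) mod n and (i - s) mod n for every s in offsets."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for s in offsets:
            adj[i].add((i + s) % n)
            adj[i].add((i - s) % n)
    return adj

def clustering(adj, v):
    """Local clustering coefficient: fraction of neighbor pairs joined."""
    nbrs = list(adj[v])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2.0 * links / (k * (k - 1))

def average_path_length(adj):
    """Mean shortest-path distance over ordered vertex pairs, via BFS."""
    n = len(adj)
    total = 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(dist.values())
    return total / (n * (n - 1))

g = circulant_graph(50, offsets=[1, 2, 3])
C = clustering(g, 0)       # identical for every vertex, by symmetry
L = average_path_length(g)
print(C, L)
```

For C_50(1, 2, 3) the clustering coefficient is high (0.6), while the path length still grows linearly in n; achieving both high clustering and logarithmically small distances requires the more careful choice of connection offsets that the paper studies.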
Demographic noise can reverse the direction of deterministic selection.
Constable, George W A; Rogers, Tim; McKane, Alan J; Tarnita, Corina E
2016-08-01
Deterministic evolutionary theory robustly predicts that populations displaying altruistic behaviors will be driven to extinction by mutant cheats that absorb common benefits but do not themselves contribute. Here we show that when demographic stochasticity is accounted for, selection can in fact act in the reverse direction to that predicted deterministically, instead favoring cooperative behaviors that appreciably increase the carrying capacity of the population. Populations that exist in larger numbers experience a selective advantage by being more stochastically robust to invasions than smaller populations, and this advantage can persist even in the presence of reproductive costs. We investigate this general effect in the specific context of public goods production and find conditions for stochastic selection reversal leading to the success of public good producers. This insight, developed here analytically, is missed by the deterministic analysis as well as by standard game theoretic models that enforce a fixed population size. The effect is found to be amplified by space; in this scenario we find that selection reversal occurs within biologically reasonable parameter regimes for microbial populations. Beyond the public good problem, we formulate a general mathematical framework for models that may exhibit stochastic selection reversal. In this context, we describe a stochastic analog to [Formula: see text] theory, by which small populations can evolve to higher densities in the absence of disturbance. PMID:27450085
Spatiotemporal calibration and resolution refinement of output from deterministic models.
Gilani, Owais; McKay, Lisa A; Gregoire, Timothy G; Guan, Yongtao; Leaderer, Brian P; Holford, Theodore R
2016-06-30
Spatiotemporal calibration of output from deterministic models is an increasingly popular tool to more accurately and efficiently estimate the true distribution of spatial and temporal processes. Current calibration techniques have focused on a single source of data on observed measurements of the process of interest that are both temporally and spatially dense. Additionally, these methods often calibrate deterministic models available in grid-cell format with pixel sizes small enough that the centroid of the pixel closely approximates the measurement for other points within the pixel. We develop a modeling strategy that allows us to simultaneously incorporate information from two sources of data on observed measurements of the process (that differ in their spatial and temporal resolutions) to calibrate estimates from a deterministic model available on a regular grid. This method not only improves estimates of the pollutant at the grid centroids but also refines the spatial resolution of the grid data. The modeling strategy is illustrated by calibrating and spatially refining daily estimates of ambient nitrogen dioxide concentration over Connecticut for 1994 from the Community Multiscale Air Quality model (temporally dense grid-cell estimates on a large pixel size) using observations from an epidemiologic study (spatially dense and temporally sparse) and Environmental Protection Agency monitoring stations (temporally dense and spatially sparse). Copyright © 2016 John Wiley & Sons, Ltd. PMID:26790617
Deterministic generation of remote entanglement with active quantum feedback
NASA Astrophysics Data System (ADS)
Martin, Leigh; Motzoi, Felix; Li, Hanhan; Sarovar, Mohan; Whaley, K. Birgitta
2015-12-01
We consider the task of deterministically entangling two remote qubits using joint measurement and feedback, but no directly entangling Hamiltonian. In order to formulate the most effective experimentally feasible protocol, we introduce the notion of average-sense locally optimal feedback protocols, which do not require real-time quantum state estimation, a difficult component of real-time quantum feedback control. We use this notion of optimality to construct two protocols that can deterministically create maximal entanglement: a semiclassical feedback protocol for low-efficiency measurements and a quantum feedback protocol for high-efficiency measurements. The latter reduces to direct feedback in the continuous-time limit, whose dynamics can be modeled by a Wiseman-Milburn feedback master equation, which yields an analytic solution in the limit of unit measurement efficiency. Our formalism can smoothly interpolate between continuous-time and discrete-time descriptions of feedback dynamics and we exploit this feature to derive a superior hybrid protocol for arbitrary nonunit measurement efficiency that switches between quantum and semiclassical protocols. Finally, we show using simulations incorporating experimental imperfections that deterministic entanglement of remote superconducting qubits may be achieved with current technology using the continuous-time feedback protocol alone.
Modeling of forward pump EDFA under pump power through MATLAB
NASA Astrophysics Data System (ADS)
Raghuwanshi, Sanjeev Kumar; Sharma, Reena
2015-05-01
Optical fiber loss is a limiting factor for high-speed optical network applications. However, the loss can be compensated by a variety of optical amplifiers. Raman amplifiers and EDFAs are widely used in optical communication systems. EDFAs have certain advantages over Raman amplifiers, such as amplifying the signal at the 1550 nm wavelength, at which the fiber loss is minimum; in addition, an EDFA has no pulse walk-off problem. With the advent of optical amplifiers like the EDFA, it is feasible to achieve bit rates beyond terabits in optical network applications. In our study, a MATLAB Simulink-based forward pumped EDFA simulation platform has been devised to evaluate performance parameters such as gain, noise figure, and amplified spontaneous emission power of a forward pumped EDFA operating in the C-band (1525-1565 nm), as functions of Er3+ fiber length, injected pump power, signal input power, and Er3+ doping density. The effect of input pump power on gain and noise figure is illustrated graphically. It is possible to completely characterize and optimize the EDFA performance using our dynamic Simulink test bed.
Fission gas bubble identification using MATLAB's image processing toolbox
Collette, R.; King, J.; Keiser, Jr., D.; Miller, B.; Madden, J.; Schulthess, J.
2016-06-08
Automated image processing routines have the potential to aid in the fuel performance evaluation process by eliminating bias in human judgment that may vary from person to person or sample to sample. This study presents several MATLAB-based image analysis routines designed for fission gas void identification in post-irradiation examination of uranium molybdenum (U–Mo) monolithic-type plate fuels. Frequency domain filtration, enlisted as a pre-processing technique, can eliminate artifacts from the image without compromising the critical features of interest. This process is coupled with a bilateral filter, an edge-preserving noise removal technique aimed at preparing the image for optimal segmentation. Adaptive thresholding proved to be the most consistent gray-level feature segmentation technique for U–Mo fuel microstructures. The Sauvola adaptive threshold technique segments the image based on histogram weighting factors in stable contrast regions and local statistics in variable contrast regions. Once all processing is complete, the algorithm outputs the total fission gas void count, the mean void size, and the average porosity. The final results demonstrate an ability to extract fission gas void morphological data faster, more consistently, and at least as accurately as manual segmentation methods.
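The Sauvola rule mentioned above computes, for each pixel, a threshold from the local mean m and local standard deviation s: T = m * (1 + k * (s/R - 1)). A minimal pure-Python sketch follows; the window size, k, and R are illustrative defaults and the function name is hypothetical, not the study's MATLAB routines:

```python
import math

def sauvola_threshold(img, w=3, k=0.2, R=128.0):
    """Per-pixel Sauvola thresholding over a w x w window (w odd).
    T = m * (1 + k * (s / R - 1)), with local mean m and local std s.
    Returns a binary image: True where the pixel exceeds its local
    threshold. Real pipelines use integral images for speed."""
    h, wd = len(img), len(img[0])
    r = w // 2
    out = [[False] * wd for _ in range(h)]
    for i in range(h):
        for j in range(wd):
            vals = [img[y][x]
                    for y in range(max(0, i - r), min(h, i + r + 1))
                    for x in range(max(0, j - r), min(wd, j + r + 1))]
            m = sum(vals) / len(vals)
            s = math.sqrt(sum((v - m) ** 2 for v in vals) / len(vals))
            t = m * (1 + k * (s / R - 1))
            out[i][j] = img[i][j] > t
    return out
```

On a bright fuel matrix with a dark void pixel, the void falls below its local threshold while the surrounding matrix stays above it, which is the segmentation behavior the abstract describes.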
Intelligent land evaluation research based on Matlab and GIS
NASA Astrophysics Data System (ADS)
Li, Hua; Chen, Youchao; Huang, Haifeng; Wu, Hao
2011-02-01
Applying neural networks to land evaluation allows us to overcome the limitations of traditional approaches, which are affected by human factors. A back propagation neural network (BP neural network) was used to evaluate the land suitability of Changling town, Guangshui city, Hubei province, China. We first establish an evaluation index system; the indexes include the soil contamination degree, the irrigation guaranteed rate, the drainage condition, the pH value, and the organic matter content. We then construct the BP neural network and implement it in MatLab. The evaluation criteria were input to the network to train it, and the network performance was tested until it met the requirements. The evaluation data of Changling town were then input as vectors to the trained network, which computed the output vectors. The output vectors were transformed into evaluation levels that can be imported into the ArcGIS software to create the land suitability assessment map. We conclude that the suitability of the unused land and the arable land for paddy fields is very high, and that Changling town is suitable for the development of paddy field agriculture.
Evamapper: A Novel Matlab Toolbox For Evapotranspiration Mapping
NASA Astrophysics Data System (ADS)
Atasever, Ü. H.; Kesikoğlu, M. H.; Özkan, C.
2013-10-01
Water consumption has been increasing as the world population grows. It is therefore very important to manage water resources with care, as water is not an endless resource. Water loss at the regional scale is the key quantity for accomplishing this goal, and one of its main components is evapotranspiration (ET), one of the most important parameters for the management of water resources. Until recent years, evapotranspiration calculations were performed locally, using data obtained from weather stations, but successful water management requires regional evapotranspiration maps. Different approaches are used to compute regional ET. Among them, direct measurement methods are neither cost-effective nor regional in scope. For cost-effective regional ET mapping, the Surface Energy Balance Algorithm (SEBAL) is the best known and most effective technique. In this study, the EvaMapper toolbox, based on the SEBAL approach, was developed in MATLAB for regional evapotranspiration mapping. With this toolbox, researchers can easily apply the SEBAL technique, which has a very complex structure, to their study areas by entering regional parameter values.
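The core of the SEBAL approach is estimating the latent heat flux as the residual of the surface energy balance, LE = Rn - G - H, and converting it to an evaporation rate. A minimal sketch with illustrative values and hypothetical function names, not EvaMapper's code:

```python
def latent_heat_flux(r_n, g, h):
    """SEBAL residual of the surface energy balance (all in W/m^2):
    LE = Rn - G - H, with net radiation Rn, soil heat flux G, and
    sensible heat flux H."""
    return r_n - g - h

def et_mm_per_hour(le):
    """Convert an instantaneous latent heat flux (W/m^2) to an equivalent
    evaporation rate in mm/h, using the latent heat of vaporization
    (about 2.45e6 J/kg) and a water density of 1000 kg/m^3."""
    return le * 3600.0 / 2.45e6
```

For example, Rn = 600, G = 100, H = 150 W/m^2 gives LE = 350 W/m^2, or about 0.51 mm/h of evapotranspiration.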
Matlab Cluster Ensemble Toolbox v. 1.0
2009-04-27
This is a Matlab toolbox for investigating the application of cluster ensembles to data classification, with the objective of improving the accuracy and/or speed of clustering. The toolbox divides the cluster ensemble problem into four areas, providing functionality for each. These include: (1) synthetic data generation, (2) clustering to generate individual data partitions and similarity matrices, (3) consensus function generation and final clustering to generate ensemble data partitioning, and (4) implementation of accuracy metrics. With regard to data generation, Gaussian data of arbitrary dimension can be generated. The kcenters algorithm can then be used to generate individual data partitions by either (a) subsampling the data and clustering each subsample, or (b) randomly initializing the algorithm and generating a clustering for each initialization. In either case an overall similarity matrix can be computed using a consensus function operating on the individual similarity matrices. A final clustering can be performed and performance metrics are provided for evaluation purposes.
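The consensus-function step in (3) is commonly implemented via a co-association matrix: entry (i, j) is the fraction of base clusterings that place samples i and j in the same cluster. A minimal sketch (the function name is hypothetical, and this is one of several consensus functions, not necessarily the toolbox's):

```python
def coassociation(partitions):
    """Build a consensus similarity matrix from several clusterings.
    partitions: list of label lists, each of length n.
    Returns an n x n matrix where entry (i, j) is the fraction of
    partitions that assign samples i and j to the same cluster."""
    n = len(partitions[0])
    m = len(partitions)
    sim = [[0.0] * n for _ in range(n)]
    for labels in partitions:
        for i in range(n):
            for j in range(n):
                if labels[i] == labels[j]:
                    sim[i][j] += 1.0 / m
    return sim
```

A final clustering can then be run on this matrix (e.g. by thresholding it or feeding it to a similarity-based clusterer) to obtain the ensemble partition.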
Efficient MATLAB computations with sparse and factored tensors.
Bader, Brett William; Kolda, Tamara Gibson (Sandia National Lab, Livermore, CA)
2006-12-01
In this paper, the term tensor refers simply to a multidimensional or N-way array, and we consider how specially structured tensors allow for efficient storage and computation. First, we study sparse tensors, which have the property that the vast majority of the elements are zero. We propose storing sparse tensors using coordinate format and describe the computational efficiency of this scheme for various mathematical operations, including those typical to tensor decomposition algorithms. Second, we study factored tensors, which have the property that they can be assembled from more basic components. We consider two specific types: a Tucker tensor can be expressed as the product of a core tensor (which itself may be dense, sparse, or factored) and a matrix along each mode, and a Kruskal tensor can be expressed as the sum of rank-1 tensors. We are interested in the case where the storage of the components is less than the storage of the full tensor, and we demonstrate that many elementary operations can be computed using only the components. All of the efficiencies described in this paper are implemented in the Tensor Toolbox for MATLAB.
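The coordinate (COO) storage scheme described above keeps only the nonzero entries as an index-tuple-to-value map, and many operations can work directly on that map. The sketch below is illustrative only; the class and method names are hypothetical and do not reproduce the Tensor Toolbox API:

```python
class SparseTensor:
    """Coordinate (COO) storage for an N-way array: only nonzeros are
    kept, as a dict mapping index tuples to values."""

    def __init__(self, shape, entries):
        self.shape = shape
        self.data = {idx: v for idx, v in entries.items() if v != 0}

    def nnz(self):
        """Number of stored nonzeros."""
        return len(self.data)

    def ttv(self, vec, mode):
        """Tensor-times-vector along `mode`: contracts that index with
        vec, returning a SparseTensor with one fewer dimension. Only the
        stored nonzeros are visited, the key efficiency of COO format."""
        out = {}
        for idx, v in self.data.items():
            rest = idx[:mode] + idx[mode + 1:]
            out[rest] = out.get(rest, 0) + v * vec[idx[mode]]
        shape = self.shape[:mode] + self.shape[mode + 1:]
        return SparseTensor(shape, out)
```

Contractions of this kind are the workhorse of tensor decomposition algorithms, and their cost here scales with the number of nonzeros rather than the full tensor size.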
MATLAB Stability and Control Toolbox Trim and Static Stability Module
NASA Technical Reports Server (NTRS)
Kenny, Sean P.; Crespo, Luis
2012-01-01
MATLAB Stability and Control Toolbox (MASCOT) utilizes geometric, aerodynamic, and inertial inputs to calculate air vehicle stability in a variety of critical flight conditions. The code is based on fundamental, non-linear equations of motion and is able to translate results into a qualitative, graphical scale useful to the non-expert. MASCOT was created to provide the conceptual aircraft designer accurate predictions of air vehicle stability and control characteristics. The code takes as input mass property data in the form of an inertia tensor, aerodynamic loading data, and propulsion (i.e. thrust) loading data. Using fundamental nonlinear equations of motion, MASCOT then calculates vehicle trim and static stability data for the desired flight condition(s). Available flight conditions include six horizontal and six landing rotation conditions with varying options for engine out, crosswind, and sideslip, plus three take-off rotation conditions. Results are displayed through a unique graphical interface developed to provide the non-stability and control expert conceptual design engineer a qualitative scale indicating whether the vehicle has acceptable, marginal, or unacceptable static stability characteristics. If desired, the user can also examine the detailed, quantitative results.
Sub-surface single ion detection in diamond: A path for deterministic color center creation
NASA Astrophysics Data System (ADS)
Abraham, John; Aguirre, Brandon; Pacheco, Jose; Camacho, Ryan; Bielejec, Edward; Sandia National Laboratories Team
Deterministic single color center creation remains a critical milestone for the integrated use of diamond color centers. It depends on three components: focused ion beam implantation to control the location, yield improvement to control the activation, and single ion implantation to control the number of implanted ions. A surface electrode detector has been fabricated on diamond where the electron hole pairs generated during ion implantation are used as the detection signal. Results will be presented demonstrating single ion detection. The detection efficiency of the device will be described as a function of implant energy and device geometry. It is anticipated that the controlled introduction of single dopant atoms in diamond will provide a basis for deterministic single localized color centers. This work was performed, in part, at the Center for Integrated Nanotechnologies, an Office of Science User Facility operated for the U.S. Department of Energy Office of Science. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
Bianchini, G.; Burgio, N.; Carta, M.; Peluso, V.; Fabrizio, V.; Ricci, L.
2012-07-01
The GUINEVERE experiment (Generation of Uninterrupted Intense Neutrons at the lead Venus Reactor) is an experimental program in support of the ADS technology presently carried out at SCK-CEN in Mol (Belgium). In the experiment a modified lay-out of the original thermal VENUS critical facility is coupled to an accelerator, built by the French body CNRS in Grenoble, working in both continuous and pulsed mode and delivering 14 MeV neutrons by bombardment of deuterons on a tritium-target. The modified lay-out of the facility consists of a fast subcritical core made of 30% U-235 enriched metallic Uranium in a lead matrix. Several off-line and on-line reactivity measurement techniques will be investigated during the experimental campaign. This report is focused on the simulation by deterministic (ERANOS French code) and Monte Carlo (MCNPX US code) calculations of three reactivity measurement techniques, Slope ({alpha}-fitting), Area-ratio and Source-jerk, applied to a GUINEVERE subcritical configuration (namely SC1). The inferred reactivity, in dollar units, by the Area-ratio method shows an overall agreement between the two deterministic and Monte Carlo computational approaches, whereas the MCNPX Source-jerk results are affected by large uncertainties and allow only partial conclusions about the comparison. Finally, no particular spatial dependence of the results is observed in the case of the GUINEVERE SC1 subcritical configuration. (authors)
MATLAB toolbox for EnviSAT InSAR data processing, visualization, and analysis
NASA Astrophysics Data System (ADS)
Zhang, Zhidong; Ma, Zunjing; Chen, Ganlu; Chen, Yan; Lu, Yilong
2012-10-01
Interferometric Synthetic Aperture Radar (InSAR) is an emerging technology with increasing applications in high-precision interferometry and 3-D digital elevation model (DEM) ground mapping. This paper presents a user-friendly MATLAB toolbox for enhanced InSAR applications based on European Space Agency (ESA) SAR missions. The developed MATLAB tools provide high-quality and flexible data processing, visualization, and analysis functions by tapping into MATLAB's rich and powerful mathematics and graphics tools. Case studies are presented with enhanced InSAR and DEM processing, visualization, and analysis examples.
2012-01-01
Background The estimation of parameter values for mathematical models of biological systems is an optimization problem that is particularly challenging due to the nonlinearities involved. One major difficulty is the existence of multiple minima into which standard optimization methods may fall during the search. Deterministic global optimization methods overcome this limitation, ensuring convergence to the global optimum within a desired tolerance. Global optimization techniques are usually classified into stochastic and deterministic. The former typically lead to lower CPU times but offer no guarantee of convergence to the global minimum in a finite number of iterations. In contrast, deterministic methods provide solutions of a given quality (i.e., optimality gap), but tend to lead to large computational burdens. Results This work presents a deterministic outer approximation-based algorithm for the global optimization of dynamic problems arising in the parameter estimation of models of biological systems. Our approach, which offers a theoretical guarantee of convergence to global minimum, is based on reformulating the set of ordinary differential equations into an equivalent set of algebraic equations through the use of orthogonal collocation methods, giving rise to a nonconvex nonlinear programming (NLP) problem. This nonconvex NLP is decomposed into two hierarchical levels: a master mixed-integer linear programming problem (MILP) that provides a rigorous lower bound on the optimal solution, and a reduced-space slave NLP that yields an upper bound. The algorithm iterates between these two levels until a termination criterion is satisfied. Conclusion The capabilities of our approach were tested in two benchmark problems, in which the performance of our algorithm was compared with that of the commercial global optimization package BARON. The proposed strategy produced near optimal solutions (i.e., within a desired tolerance) in a fraction of the CPU time required by
Sreeskandarajan, Sutharzan; Flowers, Michelle M.; Karro, John E.; Liang, Chun
2014-01-01
Summary: Palindromic sequences, or inverted repeats (IRs), in DNA sequences are involved in important biological processes such as DNA–protein binding, DNA replication and DNA transposition. Development of bioinformatics tools that are capable of accurately detecting perfect IRs can enable genome-wide studies of IR patterns in both prokaryotes and eukaryotes. Departing from conventional string-comparison approaches, we propose a novel algorithm that uses a cumulative score system based on a prime number representation of nucleotide bases. We then implemented this algorithm as a MATLAB-based program for perfect IR detection. In comparison with other existing tools, our program demonstrates high accuracy in detecting nested and overlapping IRs. Availability and implementation: The source code is freely available at http://bioinfolab.miamioh.edu/bioinfolab/palindrome.php Contact: liangc@miamioh.edu or karroje@miamioh.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24215021
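For context, a perfect IR is a segment whose reverse complement appears downstream after a short loop. The sketch below is the kind of naive string-comparison baseline the prime-number scheme improves on; the function names, arm length, and loop bound are hypothetical choices for illustration:

```python
# Watson-Crick base pairing for building reverse complements
COMP = {"A": "T", "T": "A", "C": "G", "G": "C"}

def revcomp(s):
    """Reverse complement of a DNA string."""
    return "".join(COMP[c] for c in reversed(s))

def find_perfect_irs(seq, arm=3, max_loop=2):
    """Find perfect inverted repeats: a left arm of length `arm` whose
    reverse complement occurs after a loop of 0..max_loop bases.
    Returns (start_index, loop_length) pairs."""
    hits = []
    n = len(seq)
    for i in range(n - 2 * arm + 1):
        left = seq[i:i + arm]
        for loop in range(0, max_loop + 1):
            j = i + arm + loop
            if j + arm <= n and seq[j:j + arm] == revcomp(left):
                hits.append((i, loop))
    return hits
```

On the EcoRI site "GAATTC" this reports a single zero-loop IR at position 0; the quadratic comparison cost of such scans is what motivates the paper's cumulative-score formulation.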
tweezercalib 2.0: Faster version of MatLab package for precise calibration of optical tweezers
NASA Astrophysics Data System (ADS)
Hansen, Poul Martin; Tolić-Nørrelykke, Iva Marija; Flyvbjerg, Henrik; Berg-Sørensen, Kirstine
2006-03-01
We present a vectorized version of the MatLab (MathWorks Inc.) package tweezercalib for precise calibration of optical tweezers. The calibration is based on the power spectrum of the Brownian motion of a dielectric bead trapped in the tweezers. Precision is achieved by accounting for a number of factors that affect this power spectrum, as described in v. 1 of the package [I.M. Tolić-Nørrelykke, K. Berg-Sørensen, H. Flyvbjerg, Matlab program for precision calibration of optical tweezers, Comput. Phys. Comm. 159 (2004) 225-240]. The graphical user interface allows the user to include or leave out each of these factors. Several "health tests" are applied to the experimental data during calibration, and test results are displayed graphically. Thus, the user can easily see whether the data comply with the theory used for their interpretation. Final calibration results are given with statistical errors and covariance matrix. New version program summary: Title of program: tweezercalib Catalogue identifier: ADTV_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADTV_v2_0 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Reference in CPC to previous version: I.M. Tolić-Nørrelykke, K. Berg-Sørensen, H. Flyvbjerg, Comput. Phys. Comm. 159 (2004) 225 Catalogue identifier of previous version: ADTV Does the new version supersede the original program: Yes Computer for which the program is designed and others on which it has been tested: General computer running MatLab (MathWorks Inc.) Operating systems under which the program has been tested: Windows2000, Windows-XP, Linux Programming language used: MatLab (MathWorks Inc.), standard license Memory required to execute with typical data: Of order four times the size of the data file High speed storage required: none No. of lines in distributed program, including test data, etc.: 135 989 No. of bytes in distributed program, including test data, etc.: 1 527 611 Distribution
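The calibration fits the recorded power spectrum to the Lorentzian of an overdamped bead in a harmonic trap, P(f) = D / (2*pi^2*(fc^2 + f^2)); the fitted corner frequency fc then gives the trap stiffness k = 2*pi*gamma*fc for drag coefficient gamma. A sketch of these model relations (the package's actual fit includes further corrections omitted here):

```python
import math

def lorentzian(f, d, fc):
    """Idealized power spectrum of an overdamped bead in a harmonic trap:
    P(f) = D / (2*pi^2 * (fc^2 + f^2)), with diffusion coefficient D and
    corner frequency fc."""
    return d / (2 * math.pi ** 2 * (fc ** 2 + f ** 2))

def trap_stiffness(fc, gamma):
    """Trap stiffness from the fitted corner frequency: k = 2*pi*gamma*fc."""
    return 2 * math.pi * gamma * fc
```

A quick sanity check on the shape: the spectrum drops to half of its zero-frequency value exactly at f = fc, which is what makes the corner frequency identifiable from the data.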
a Matlab Geodetic Software for Processing Airborne LIDAR Bathymetry Data
NASA Astrophysics Data System (ADS)
Pepe, M.; Prezioso, G.
2015-04-01
The ability to build three-dimensional models through technologies based on GNSS satellite navigation systems, together with the continuous development of new sensors, such as Airborne Laser Scanning Hydrography (ALH), data acquisition methods and 3D multi-resolution representations, has contributed significantly to the digital 3D documentation, mapping, preservation and representation of landscapes and heritage, as well as to the growth of research in these fields. However, GNSS systems provide ellipsoidal heights; to transform such a height into an orthometric height, a geoid undulation model is needed. The latest and most accurate global geoid undulation model available worldwide is EGM2008, which has been publicly released by the U.S. National Geospatial-Intelligence Agency (NGA) EGM Development Team. Given the availability and accuracy of this geoid model, we can use it in geomatics applications that require the conversion of heights. When using this model to correct the elevation of a point that does not coincide with any grid node, the elevation information of the adjacent nodes must be interpolated. The purpose of this paper is to produce a Matlab® geodetic software package for processing airborne LIDAR bathymetry data. In particular, we focus on point clouds in the ASPRS LAS format and convert the ellipsoidal heights to orthometric heights. The algorithm, valid over the whole globe and operative for all UTM zones, converts ellipsoidal heights using the EGM2008 model. For this model we analyse the slopes which occur, in some critical areas, between the nodes of the undulation grid; we focus our attention on marine areas, verifying the impact that the slopes have on the calculation of the orthometric height and, consequently, on the accuracy of the 3-D point clouds. This experiment is carried out by analysing an ASPRS LAS file containing topographic and bathymetric data collected with LIDAR systems along the coasts of Oregon and Washington (USA).
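The height conversion described above reduces to two steps: bilinearly interpolating the geoid undulation N from the four surrounding grid nodes, then applying H = h - N. A minimal sketch; the grid values, geometry, and function names are illustrative, not EGM2008 data or the paper's code:

```python
def bilinear(grid, lat0, lon0, step, lat, lon):
    """Bilinear interpolation of a geoid undulation grid N (rows indexed
    by latitude, columns by longitude) at point (lat, lon). The grid has
    origin (lat0, lon0) and uniform node spacing `step` in degrees."""
    x = (lon - lon0) / step
    y = (lat - lat0) / step
    i, j = int(y), int(x)          # lower-left node of the enclosing cell
    dy, dx = y - i, x - j          # fractional position inside the cell
    return ((1 - dy) * (1 - dx) * grid[i][j] +
            (1 - dy) * dx * grid[i][j + 1] +
            dy * (1 - dx) * grid[i + 1][j] +
            dy * dx * grid[i + 1][j + 1])

def orthometric_height(h_ellipsoidal, undulation):
    """H = h - N: orthometric height from ellipsoidal height h and
    interpolated geoid undulation N."""
    return h_ellipsoidal - undulation
```

Steep undulation slopes between nodes, of the kind the paper analyses over marine areas, translate directly into interpolation error in N and hence in H.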
LucidDraw: Efficiently visualizing complex biochemical networks within MATLAB
2010-01-01
Background Biochemical networks play an essential role in systems biology. Rapidly growing network data and versatile research activities call for convenient visualization tools to aid in intuitively perceiving the abstract structures of networks and gaining insights into the functional implications of networks. There are various kinds of network visualization software, but they are usually not adequate for visual analysis of complex biological networks, mainly for two reasons: 1) most existing drawing methods suitable for biochemical networks have high computation loads and can hardly achieve near real-time visualization; 2) available network visualization tools are designed for working in certain network modeling platforms, so they are not convenient for general analyses due to lack of a broader range of readily accessible numerical utilities. Results We present LucidDraw as a visual analysis tool, which features (a) speed: typical biological networks with several hundreds of nodes can be drawn in a few seconds through a new layout algorithm; (b) ease of use: working within MATLAB makes it convenient to manipulate and analyze the network data using a broad spectrum of sophisticated numerical functions; (c) flexibility: layout styles and incorporation of other available information about functional modules can be controlled by users with little effort, and the output drawings are interactively modifiable. Conclusions Equipped with a new grid layout algorithm proposed here, LucidDraw serves as an auxiliary network analysis tool capable of visualizing complex biological networks in near real-time with controllable layout styles and drawing details. The framework of the algorithm enables easy incorporation of extra biological information, if available, to influence the output layouts with predefined node grouping features. PMID:20074382
Improve Data Mining and Knowledge Discovery Through the Use of MatLab
NASA Technical Reports Server (NTRS)
Shaykhian, Gholam Ali; Martin, Dawn (Elliott); Beil, Robert
2011-01-01
Data mining is widely used to mine business, engineering, and scientific data. Data mining uses pattern based queries, searches, or other analyses of one or more electronic databases/datasets in order to discover or locate a predictive pattern or anomaly indicative of system failure, criminal or terrorist activity, etc. There are various algorithms, techniques and methods used to mine data; including neural networks, genetic algorithms, decision trees, nearest neighbor method, rule induction association analysis, slice and dice, segmentation, and clustering. These algorithms, techniques and methods used to detect patterns in a dataset, have been used in the development of numerous open source and commercially available products and technology for data mining. Data mining is best realized when latent information in a large quantity of data stored is discovered. No one technique solves all data mining problems; challenges are to select algorithms or methods appropriate to strengthen data/text mining and trending within given datasets. In recent years, throughout industry, academia and government agencies, thousands of data systems have been designed and tailored to serve specific engineering and business needs. Many of these systems use databases with relational algebra and structured query language to categorize and retrieve data. In these systems, data analyses are limited and require prior explicit knowledge of metadata and database relations; lacking exploratory data mining and discoveries of latent information. This presentation introduces MatLab(R) (MATrix LABoratory), an engineering and scientific data analyses tool to perform data mining. MatLab was originally intended to perform purely numerical calculations (a glorified calculator). Now, in addition to having hundreds of mathematical functions, it is a programming language with hundreds built in standard functions and numerous available toolboxes. MatLab's ease of data processing, visualization and its
MILAMIN: MATLAB-based finite element method solver for large problems
NASA Astrophysics Data System (ADS)
Dabrowski, M.; Krotkiewski, M.; Schmid, D. W.
2008-04-01
The finite element method (FEM) combined with unstructured meshes forms an elegant and versatile approach capable of dealing with the complexities of problems in Earth science. Practical applications often require high-resolution models that necessitate advanced computational strategies. We therefore developed "Million a Minute" (MILAMIN), an efficient MATLAB implementation of FEM that is capable of setting up, solving, and postprocessing two-dimensional problems with one million unknowns in one minute on a modern desktop computer. MILAMIN allows the user to achieve numerical resolutions that are necessary to resolve the heterogeneous nature of geological materials. In this paper we provide the technical knowledge required to develop such models without the need to buy a commercial FEM package, programming compiler-language code, or hiring a computer specialist. It has been our special aim that all the components of MILAMIN perform efficiently, individually and as a package. While some of the components rely on readily available routines, we develop others from scratch and make sure that all of them work together efficiently. One of the main technical focuses of this paper is the optimization of the global matrix computations. The performance bottlenecks of the standard FEM algorithm are analyzed. An alternative approach is developed that sustains high performance for any system size. Applied optimizations eliminate Basic Linear Algebra Subprograms (BLAS) drawbacks when multiplying small matrices, reduce operation count and memory requirements when dealing with symmetric matrices, and increase data transfer efficiency by maximizing cache reuse. Applying loop interchange allows us to use BLAS on large matrices. In order to avoid unnecessary data transfers between RAM and CPU cache we introduce loop blocking. The optimization techniques are useful in many areas as demonstrated with our MILAMIN applications for thermal and incompressible flow (Stokes) problems. We use
Development of the Borehole 2-D Seismic Tomography Software Using MATLAB
NASA Astrophysics Data System (ADS)
Nugraha, A. D.; Syahputra, A.; Fatkhan, F.; Sule, R.; Hendriyana, A.
2011-12-01
We developed 2-D borehole seismic tomography software, called "EARTHMAX-2D TOMOGRAPHY", to image subsurface physical properties, including P-wave and S-wave velocities, between two boreholes. We used the Graphic User Interface (GUI) facilities of the MATLAB programming language to create the software. In this software, travel times of seismic waves from source to receiver, computed by the pseudo-bending ray tracing method, are used as input for the tomographic inversion. The user can also set up the model parameterization, the initial velocity model, and the ray tracing process, conduct the borehole seismic tomography inversion, and finally visualize the inversion results. The LSQR method was applied to solve the tomographic inverse problem. We provide a checkerboard resolution test (CRT) to evaluate the model resolution of the tomographic inversion. To validate the developed software, we tested it for geotechnical purposes. We conducted data acquisition in the "ITB X-field" located on the ITB campus, using two boreholes with a depth of 39 meters. Seismic wave sources were generated by an impulse generator and a sparker, and were recorded by a type-3 borehole hydrophone string. We then analyzed and picked seismic arrival times as input for the tomographic inversion. As a result, we imaged the estimated weathering layer, sediment layer, and basement rock in the field, as depicted by the seismic wave structures. More detailed information about the developed software will be presented. Keywords: borehole, tomography, earthmax-2D, inversion
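The LSQR inversion step can be sketched on a toy instance. The real software is MATLAB with pseudo-bending ray tracing; here is a minimal Python example (an invented 2x2 slowness grid and straight rays, not the paper's setup) showing how travel times t = G s are inverted for cell slownesses with LSQR:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import lsqr

# Toy 2x2 slowness grid (s/m); cells ordered [r0c0, r0c1, r1c0, r1c1].
s_true = np.array([0.5, 0.6, 0.7, 0.8])

# Each row: path length (m) of one straight ray in each cell.
G = csr_matrix(np.array([
    [1.0, 1.0, 0.0, 0.0],                 # ray through the top row
    [0.0, 0.0, 1.0, 1.0],                 # ray through the bottom row
    [1.0, 0.0, 1.0, 0.0],                 # ray through the left column
    [0.0, 1.0, 0.0, 1.0],                 # ray through the right column
    [np.sqrt(2), 0.0, 0.0, np.sqrt(2)],   # diagonal ray
]))
t = G @ s_true                             # observed travel times (noise-free)

# LSQR solves the (generally over-determined) linear tomography system.
s_est = lsqr(G, t, atol=1e-12, btol=1e-12)[0]
print(np.round(s_est, 3))                  # recovers s_true for this toy case
```

Real borehole geometries give sparse, ill-conditioned G matrices with many more rays and cells, which is exactly the regime LSQR is designed for.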
GazeAlyze: a MATLAB toolbox for the analysis of eye movement data.
Berger, Christoph; Winkels, Martin; Lischke, Alexander; Höppner, Jacqueline
2012-06-01
This article presents GazeAlyze, a software package, written as a MATLAB (MathWorks Inc., Natick, MA) toolbox, developed for the analysis of eye movement data. GazeAlyze was developed for the batch processing of multiple data files and was designed as a framework with extendable modules. GazeAlyze encompasses the main functions of the entire processing queue of eye movement data for static visual stimuli. This includes detecting and filtering artifacts, detecting events, generating regions of interest, generating spreadsheets for further statistical analysis, and providing methods for the visualization of results, such as path plots and fixation heat maps. All functions can be controlled through graphical user interfaces. GazeAlyze includes functions for correcting eye movement data for the displacement of the head relative to the camera after calibration in fixed head mounts. The preprocessing and event detection methods in GazeAlyze are based on the software ILAB 3.6.8 (Gitelman, Behav Res Methods Instrum Comput 34(4), 605-612, 2002). GazeAlyze is distributed free of charge under the terms of the GNU General Public License and allows code modifications to be made so that the program's performance can be adjusted according to a user's scientific requirements. PMID:21898158
Optimization design of wind turbine drive train based on Matlab genetic algorithm toolbox
NASA Astrophysics Data System (ADS)
Li, R. N.; Liu, X.; Liu, S. J.
2013-12-01
In order to ensure the high efficiency of the whole flexible drive train of the front-end speed-adjusting wind turbine, the working principle of the main parts of the drive train is analyzed. The rotating speed ratios of the three planetary gear trains are selected as the critical parameters under study. The mathematical model of the torque converter speed ratio is established based on these three critical variables, and the effect of the key parameters on the efficiency of the hydraulic mechanical transmission is analyzed. Based on torque balance and energy balance, and with reference to the hydraulic mechanical transmission characteristics, the transmission efficiency expression of the whole drive train is established. The fitness function and constraint functions are established based on the drive-train transmission efficiency and the torque converter rotating speed ratio range, respectively, and the optimization is carried out using the MATLAB genetic algorithm toolbox. The optimization method and results provide an optimization program for the exact matching of the wind turbine rotor, gearbox, hydraulic mechanical transmission, hydraulic torque converter, and synchronous generator, ensure that the drive train works with high efficiency, and give a reference for the selection of the torque converter and hydraulic mechanical transmission.
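The bound-constrained search described above can be sketched with a tiny genetic algorithm. The paper uses the MATLAB GA toolbox and the drive-train efficiency expression it derives; that expression is not reproduced here, so the sketch below substitutes an assumed smooth surrogate with a known optimum, purely to show the selection / crossover / mutation loop over the three speed ratios:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed surrogate for the drive-train efficiency as a function of the three
# planetary speed ratios (the paper's real expression is not reproduced here).
TARGET = np.array([0.4, 0.5, 0.6])

def efficiency(x):
    return 1.0 - np.sum((x - TARGET) ** 2, axis=-1)

lo, hi = 0.0, 1.0                          # admissible speed-ratio range
pop = rng.uniform(lo, hi, size=(60, 3))    # initial population
best_x, best_f = None, -np.inf
for gen in range(300):
    fit = efficiency(pop)
    k = int(np.argmax(fit))
    if fit[k] > best_f:                    # track the best individual found
        best_f, best_x = float(fit[k]), pop[k].copy()
    # tournament selection
    i, j = rng.integers(0, len(pop), (2, len(pop)))
    parents = np.where((fit[i] >= fit[j])[:, None], pop[i], pop[j])
    # arithmetic crossover with a shifted copy of the parent pool
    alpha = rng.uniform(0.0, 1.0, (len(pop), 1))
    children = alpha * parents + (1.0 - alpha) * np.roll(parents, 1, axis=0)
    # Gaussian mutation, clipped to the bound constraints on the ratios
    children += rng.normal(0.0, 0.02, children.shape)
    pop = np.clip(children, lo, hi)

print(best_x, best_f)
```

In MATLAB the same loop is a single call to `ga` with bounds and a fitness handle; the sketch only makes the internal mechanics visible.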
SPIDYAN, a MATLAB library for simulating pulse EPR experiments with arbitrary waveform excitation.
Pribitzer, Stephan; Doll, Andrin; Jeschke, Gunnar
2016-02-01
Frequency-swept chirp pulses, created with arbitrary waveform generators (AWGs), can achieve inversion over a range of several hundred MHz. Such passage pulses provide defined flip angles and increase sensitivity. The fact that spectra are not excited at once, but single transitions are passed one after another, can cause new effects in established pulse EPR sequences. We developed a MATLAB library for the simulation of pulse EPR, which is especially suited for modeling spin dynamics in ultra-wideband (UWB) EPR experiments, but can also be used for other experiments and NMR. At present the command-line-controlled SPin DYnamics ANalysis (SPIDYAN) package supports one-spin and two-spin systems with arbitrary spin quantum numbers. By providing the program with appropriate spin operators and Hamiltonian matrices any spin system is accessible, with limits set only by available memory and computation time. Any pulse sequence using rectangular and linearly or variable-rate frequency-swept chirp pulses, including phase cycling, can be quickly created. To keep track of spin evolution the user can choose from a vast variety of detection operators, including transition-selective operators. If relaxation effects can be neglected, the program solves the Liouville-von Neumann equation and propagates spin density matrices; in all other cases SPIDYAN uses the quantum mechanical master equation and Liouvillians for propagation. In order to account for the resonator response function, which on the scale of UWB excitation limits bandwidth, the program includes a simple RLC circuit model. Another subroutine can compute waveforms that, for a given resonator, maintain a constant critical adiabaticity factor over the excitation band. Computational efficiency is enhanced by precomputing propagator lookup tables for the whole set of AWG output levels. The features of the software library are discussed and demonstrated with spin-echo and population transfer simulations. PMID:26773526
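The relaxation-free propagation described above (precompute a propagator, evolve the density matrix, record a detection operator) can be sketched in a few lines. SPIDYAN itself is MATLAB; this Python sketch uses a single spin-1/2 under a free-evolution Hamiltonian with an assumed offset frequency, and checks the expected precession of the transverse magnetization:

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices for a spin-1/2
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

omega = 2 * np.pi * 1.0          # offset frequency (rad per unit time), assumed
H = 0.5 * omega * sz             # free-evolution Hamiltonian

# Initial density matrix: spin along +x
rho = 0.5 * (np.eye(2) + sx)

dt = 0.001
U = expm(-1j * H * dt)           # single-step propagator, precomputed once
                                 # (cf. SPIDYAN's propagator lookup tables)
ts, mx = [], []
for n in range(1000):
    ts.append(n * dt)
    mx.append(np.trace(rho @ sx).real)   # detection operator expectation
    rho = U @ rho @ U.conj().T           # Liouville-von Neumann evolution step

# The transverse magnetization precesses: <sigma_x>(t) = cos(omega t)
assert np.allclose(mx, np.cos(omega * np.array(ts)), atol=1e-6)
```

Relaxation would replace the unitary sandwich with a Liouvillian acting on the vectorized density matrix, as the abstract notes.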
Hyperspectral imaging in medicine: image pre-processing problems and solutions in Matlab.
Koprowski, Robert
2015-11-01
The paper presents problems and solutions related to hyperspectral image pre-processing. New methods of preliminary image analysis are proposed. The paper discusses problems that occur in Matlab when analysing this type of image. Moreover, new methods are discussed, with Matlab source code that can be used in practice without any licensing restrictions. A sample application and results of the proposed hyperspectral image analysis are also presented. PMID:25676816
NASA Technical Reports Server (NTRS)
Howard, Joseph
2007-01-01
The viewgraph presentation provides an introduction to the James Webb Space Telescope (JWST). The first part provides a brief overview of Matlab toolkits including CodeV, OSLO, and Zemax Toolkits. The toolkit overview examines purpose, layout, how Matlab gets data from CodeV, function layout, and using cvHELP. The second part provides examples of use with JWST, including wavefront sensitivities and alignment simulations.
MATLAB implementation of a dynamic clamp with bandwidth >125 kHz capable of generating INa at 37°C
Clausen, Chris; Valiunas, Virginijus; Brink, Peter R.; Cohen, Ira S.
2012-01-01
We describe the construction of a dynamic clamp with bandwidth >125 kHz that utilizes a high-performance, yet low-cost, standard home/office PC interfaced with a high-speed (16 bit) data acquisition module. High bandwidth is achieved by exploiting recently available software advances (code-generation technology, an optimized real-time kernel). Dynamic-clamp programs are constructed using Simulink, a visual programming language. Blocks for computation of membrane currents are written in the high-level MATLAB language; no programming in C is required. The instrument can be used in single- or dual-cell configurations, with the capability to modify programs while experiments are in progress. We describe an algorithm for computing the fast transient Na+ current (INa) in real time, and test its accuracy and stability using rate constants appropriate for 37°C. We then construct a program capable of supplying three currents to a cell preparation: INa, the hyperpolarization-activated inward pacemaker current (If), and an inward-rectifier K+ current (IK1). The program corrects for the IR drop due to electrode current flow, and also records all voltages and currents. We tested this program on dual patch-clamped HEK293 cells, where the dynamic clamp controls a current-clamp amplifier and a voltage-clamp amplifier controls membrane potential, and on current-clamped HEK293 cells, where the dynamic clamp produces spontaneous pacing behavior exhibiting Na+ spikes in otherwise passive cells. PMID:23224681
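The real-time INa update the abstract describes (advance the gating variables one step, then output the instantaneous current) can be sketched as follows. The paper uses rate constants adjusted for 37°C inside MATLAB/Simulink; the Python sketch below instead uses the classic Hodgkin-Huxley squid-axon rates (~6.3°C), purely to illustrate the per-step update scheme:

```python
import numpy as np

# Classic Hodgkin-Huxley Na+ gating rates (squid axon, ~6.3 degC); the paper's
# 37 degC constants differ, so this only illustrates the update scheme.
def alpha_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def beta_m(V):  return 4.0 * np.exp(-(V + 65) / 18)
def alpha_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def beta_h(V):  return 1.0 / (1 + np.exp(-(V + 35) / 10))

g_na, E_na = 120.0, 50.0          # mS/cm^2, mV

def ina_step(V, m, h, dt):
    """One explicit-Euler gate update, then the instantaneous current."""
    m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
    return g_na * m**3 * h * (V - E_na), m, h

# Voltage step from rest (-65 mV) to -20 mV: INa activates then inactivates.
V, dt = -20.0, 0.001              # ms time step, i.e. a high update rate
m = alpha_m(-65) / (alpha_m(-65) + beta_m(-65))   # steady state at rest
h = alpha_h(-65) / (alpha_h(-65) + beta_h(-65))
trace = []
for _ in range(5000):             # 5 ms of simulated clamp updates
    I, m, h = ina_step(V, m, h, dt)
    trace.append(I)

peak = min(trace)                 # inward current is negative
print(round(peak, 1))
```

In the actual instrument this loop body runs once per acquisition cycle against the measured membrane potential, which is why the >125 kHz bandwidth matters for the fast m gate.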
Eldin Nafee, Sherif Salah
2013-07-24
Version 00 Calculation of the decay heat is of great importance for the design of the shielding of discharged fuel, the design and transport of fuel-storage flasks, and the management of the resulting radioactive waste. These are relevant to safety and have large economic and legislative consequences. In the HEATKAU code, a new approach has been proposed to evaluate the decay heat power after a fission burst of a fissile nuclide for short cooling times. This method is based on the numerical solution of the coupled linear differential equations that describe the decay and buildup of the minor fission product (MFP) nuclides. HEATKAU is written entirely in the MATLAB programming environment. The MATLAB data can be stored in a standard, fast and easy-access, platform-independent binary format which is easy to visualize.
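The coupled linear decay/buildup equations mentioned above have the form dN/dt = M N with a lower-triangular decay matrix. HEATKAU is MATLAB; the Python sketch below (with assumed decay constants and energies per decay, not real nuclear data) solves a two-member chain with a matrix exponential and checks it against Bateman's analytic solution:

```python
import numpy as np
from scipy.linalg import expm

# Two-member decay chain A -> B -> (stable): dN/dt = M @ N, the kind of linear
# system HEATKAU integrates for the minor fission products (illustrative only).
lam_a, lam_b = 0.3, 0.05             # decay constants (1/s), assumed values
M = np.array([[-lam_a, 0.0],
              [ lam_a, -lam_b]])
N0 = np.array([1.0e6, 0.0])          # initial nuclide numbers

t = 10.0
N = expm(M * t) @ N0                 # matrix-exponential solution

# Bateman's analytic solution for the same chain
Na = N0[0] * np.exp(-lam_a * t)
Nb = N0[0] * lam_a / (lam_b - lam_a) * (np.exp(-lam_a * t) - np.exp(-lam_b * t))
assert np.allclose(N, [Na, Nb])

# Decay heat = sum over nuclides of (decay rate x energy per decay);
# the Q values here are assumed numbers, not evaluated nuclear data.
Q = np.array([1.2, 0.8])
power = np.sum(Q * np.array([lam_a, lam_b]) * N)
print(power)
```

For hundreds of MFP nuclides the same structure holds, with M assembled from evaluated decay data and branching ratios.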
Spatial continuity measures for probabilistic and deterministic geostatistics
Isaaks, E.H.; Srivastava, R.M.
1988-05-01
Geostatistics has traditionally used a probabilistic framework, one in which expected values or ensemble averages are of primary importance. The less familiar deterministic framework views geostatistical problems in terms of spatial integrals. This paper outlines the two frameworks and examines the issue of which spatial continuity measure, the covariance C(h) or the variogram γ(h), is appropriate for each framework. Although C(h) and γ(h) were defined originally in terms of spatial integrals, the convenience of probabilistic notation made the expected value definitions more common. These now classical expected value definitions entail a linear relationship between C(h) and γ(h); the spatial integral definitions do not. In a probabilistic framework, where available sample information is extrapolated to domains other than the one which was sampled, the expected value definitions are appropriate; furthermore, within a probabilistic framework, reasons exist for preferring the variogram to the covariance function. In a deterministic framework, where available sample information is interpolated within the same domain, the spatial integral definitions are appropriate and no reasons are known for preferring the variogram. A case study on a Wiener-Levy process demonstrates differences between the two frameworks and shows that, for most estimation problems, the deterministic viewpoint is more appropriate. Several case studies on real data sets reveal that the sample covariance function reflects the character of spatial continuity better than the sample variogram. From both theoretical and practical considerations, clearly for most geostatistical problems, direct estimation of the covariance is better than the traditional variogram approach.
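The two sample statistics being compared can be made concrete on a tiny series. The following Python sketch (invented data, not the paper's case studies) computes the classical sample variogram and a lag-h sample covariance; for second-order stationary data the expected-value definitions give γ(h) = C(0) − C(h), but the two sample estimators generally differ, which is part of the paper's point:

```python
import numpy as np

def sample_variogram(z, h):
    """Classical estimator: gamma(h) = mean squared increment / 2."""
    d = z[h:] - z[:-h]
    return 0.5 * np.mean(d ** 2)

def sample_covariance(z, h):
    """Sample C(h) using lag-specific head and tail means (h >= 1)."""
    head, tail = z[h:], z[:-h]
    return np.mean((head - head.mean()) * (tail - tail.mean()))

z = np.array([0.0, 1.0, 0.0, 1.0, 0.0, 1.0])   # toy alternating series
print(sample_variogram(z, 1))   # increments are all +/-1, so gamma(1) = 0.5
print(sample_covariance(z, 1))
```

On real transects one would compute both over a range of lags and compare their shapes, as the paper's case studies do.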
Deterministic and Nondeterministic Behavior of Earthquakes and Hazard Mitigation Strategy
NASA Astrophysics Data System (ADS)
Kanamori, H.
2014-12-01
Earthquakes exhibit both deterministic and nondeterministic behavior. Deterministic behavior is controlled by length and time scales such as the dimension of seismogenic zones and plate-motion speed. Nondeterministic behavior is controlled by the interaction of many elements, such as asperities, in the system. Some subduction zones have strong deterministic elements which allow forecasts of future seismicity. For example, the forecasts of the 2010 Mw=8.8 Maule, Chile, earthquake and the 2012 Mw=7.6 Costa Rica earthquake are good examples in which useful forecasts were made within a solid scientific framework using GPS. However, even in these cases, because of the nondeterministic elements, uncertainties are difficult to quantify. In some subduction zones, nondeterministic behavior dominates because of complex plate boundary structures and defies useful forecasts. The 2011 Mw=9.0 Tohoku-Oki earthquake may be an example in which the physical framework was reasonably well understood, but complex interactions of asperities and insufficient knowledge about the subduction-zone structures led to the unexpected tragic consequence. Despite these difficulties, broadband seismology, GPS, and rapid data processing-telemetry technology can contribute to effective hazard mitigation through a scenario-earthquake approach and real-time warning. A scale-independent relation between M0 (seismic moment) and the source duration, t, can be used for the design of average scenario earthquakes. However, outliers caused by the variation of stress drop, radiation efficiency, and aspect ratio of the rupture plane are often the most hazardous and need to be included in scenario earthquakes. Recent developments in real-time technology will help seismologists to cope with, and prepare for, devastating tsunamis and earthquakes. Combining a better understanding of earthquake diversity with modern technology is the key to effective and comprehensive hazard mitigation practices.
Deterministic side-branching during thermal dendritic growth
NASA Astrophysics Data System (ADS)
Mullis, Andrew M.
2015-06-01
The accepted view on dendritic side-branching is that side-branches grow as the result of selective amplification of thermal noise and that in the absence of such noise dendrites would grow without the development of side-arms. However, recently there has been renewed speculation about dendrites displaying deterministic side-branching [see e.g. ME Glicksman, Metall. Mater. Trans A 43 (2012) 391]. Generally, numerical models of dendritic growth, such as phase-field simulation, have tended to display behaviour which is commensurate with the former view, in that simulated dendrites do not develop side-branches unless noise is introduced into the simulation. However, here we present simulations at high undercooling that show that under certain conditions deterministic side-branching may occur. We use a model formulated in the thin interface limit and a range of advanced numerical techniques to minimise the numerical noise introduced into the solution, including a multigrid solver. Not only are multigrid solvers one of the most efficient means of inverting the large, but sparse, system of equations that results from implicit time-stepping, they are also very effective at smoothing noise at all wavelengths. This is in contrast to most Jacobi or Gauss-Seidel iterative schemes which are effective at removing noise with wavelengths comparable to the mesh size but tend to leave noise at longer wavelengths largely undamped. From an analysis of the tangential thermal gradients on the solid-liquid interface the mechanism for side-branching appears to be consistent with the deterministic model proposed by Glicksman.
Statistical methods of parameter estimation for deterministically chaotic time series.
Pisarenko, V F; Sornette, D
2004-03-01
We discuss the possibility of applying some standard statistical methods (the least-square method, the maximum likelihood method, and the method of statistical moments for estimation of parameters) to a deterministically chaotic low-dimensional dynamic system (the logistic map) containing observational noise. A "segmentation fitting" maximum likelihood (ML) method is suggested to estimate the structural parameter of the logistic map along with the initial value x(1), considered as an additional unknown parameter. The segmentation fitting method, called "piece-wise" ML, is similar in spirit to, but simpler than and with smaller bias than, the "multiple shooting" method previously proposed. Comparisons with different previously proposed techniques on simulated numerical examples give favorable results (at least for the investigated combinations of sample size N and noise level). Moreover, unlike some suggested techniques, our method does not require a priori knowledge of the noise variance. We also clarify the nature of the inherent difficulties in the statistical analysis of deterministically chaotic time series and the status of previously proposed Bayesian approaches. We note the trade-off between the need to use a large number of data points in the ML analysis to decrease the bias (to guarantee consistency of the estimation) and the unstable nature of dynamical trajectories, with exponentially fast loss of memory of the initial condition. The method of statistical moments for the estimation of the parameter of the logistic map is discussed. This seems to be the only method whose consistency for deterministically chaotic time series has so far been proved theoretically (not only numerically). PMID:15089376
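The baseline the paper improves upon can be sketched directly. Naive least squares on the one-step relation x(t+1) = r x(t)(1 − x(t)) has a closed-form estimator; with noisy observations it suffers from the errors-in-variables bias the paper analyzes, but for small noise it already recovers r well. This Python sketch (assumed r and noise level, a simplification of the paper's setting) shows the closed-form fit:

```python
import numpy as np

rng = np.random.default_rng(42)

# Logistic-map trajectory x_{t+1} = r x_t (1 - x_t), with small observational
# noise added afterwards (a simplified stand-in for the paper's setting).
r_true, n = 3.8, 500
x = np.empty(n)
x[0] = 0.3
for t in range(n - 1):
    x[t + 1] = r_true * x[t] * (1 - x[t])
y = x + rng.normal(0, 1e-3, n)       # noisy observations

# Ordinary least squares on y_{t+1} = r * y_t (1 - y_t) + error has the
# closed form r_hat = sum(y_{t+1} f_t) / sum(f_t^2) with f_t = y_t (1 - y_t).
f = y[:-1] * (1 - y[:-1])
r_hat = np.sum(y[1:] * f) / np.sum(f ** 2)
print(r_hat)
```

As the noise level grows, this estimator's bias grows with it, which motivates the segmentation-fitting ML approach of the paper.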
Demonstration of deterministic and high fidelity squeezing of quantum information
Yoshikawa, Jun-ichi; Takei, Nobuyuki; Furusawa, Akira; Hayashi, Toshiki; Akiyama, Takayuki; Huck, Alexander; Andersen, Ulrik L.
2007-12-15
By employing a recent proposal [R. Filip, P. Marek, and U.L. Andersen, Phys. Rev. A 71, 042308 (2005)] we experimentally demonstrate a universal, deterministic, and high-fidelity squeezing transformation of an optical field. It relies only on linear optics, homodyne detection, feedforward, and an ancillary squeezed vacuum state, thus direct interaction between a strong pump and the quantum state is circumvented. We demonstrate three different squeezing levels for a coherent state input. This scheme is highly suitable for the fault-tolerant squeezing transformation in a continuous variable quantum computer.
A deterministic global optimization using smooth diagonal auxiliary functions
NASA Astrophysics Data System (ADS)
Sergeyev, Yaroslav D.; Kvasov, Dmitri E.
2015-04-01
In many practical decision-making problems it happens that the functions involved in the optimization process are black-box, with unknown analytical representations, and hard to evaluate. In this paper, a global optimization problem is considered where both the goal function f(x) and its gradient f′(x) are black-box functions. It is supposed that f′(x) satisfies the Lipschitz condition over the search hyperinterval with an unknown Lipschitz constant K. A new deterministic 'Divide-the-Best' algorithm based on efficient diagonal partitions and smooth auxiliary functions is proposed in its basic version, its convergence conditions are studied, and numerical experiments executed on eight hundred test functions are presented.
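The lower-bounding idea behind such methods can be shown with the simplest relative: 1-D Piyavskii-Shubert minimization, which uses a Lipschitz constant for f itself rather than the paper's smooth auxiliary functions built from the gradient's Lipschitz constant, and no diagonal partitions. The Python sketch below (with an invented test function and a hand-checked overestimate of L) only illustrates the underlying principle:

```python
import numpy as np

def piyavskii(f, a, b, L, iters=100):
    """Basic 1-D Lipschitz global minimization (Piyavskii-Shubert).

    A much-simplified relative of the paper's algorithm: piecewise-linear
    cones y_k - L|x - x_k| give a lower envelope; each step evaluates f at
    the minimizer of that envelope.
    """
    xs, ys = [a, b], [f(a), f(b)]
    for _ in range(iters):
        order = np.argsort(xs)
        xs = [xs[i] for i in order]
        ys = [ys[i] for i in order]
        best_lb, best_x = np.inf, None
        for (x1, y1), (x2, y2) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
            xm = 0.5 * (x1 + x2) + (y1 - y2) / (2 * L)   # cone intersection
            lb = 0.5 * (y1 + y2) - 0.5 * L * (x2 - x1)   # interval lower bound
            if lb < best_lb:
                best_lb, best_x = lb, xm
        xs.append(best_x)
        ys.append(f(best_x))
    k = int(np.argmin(ys))
    return xs[k], ys[k]

f = lambda x: (x - 0.3) ** 2 + 0.1 * np.sin(20 * x)   # black-box test function
# |f'| <= 2*0.7 + 2 = 3.4 on [0, 1], so L = 4 is a valid overestimate.
x_best, f_best = piyavskii(f, 0.0, 1.0, L=4.0)
print(x_best, f_best)
```

The paper's contribution lies in doing this in many dimensions with diagonal partitions and in exploiting smoothness of f, which gives much tighter auxiliary functions than the cones above.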
Deterministic Single-Phonon Source Triggered by a Single Photon
NASA Astrophysics Data System (ADS)
Söllner, Immo; Midolo, Leonardo; Lodahl, Peter
2016-06-01
We propose a scheme that enables the deterministic generation of single phonons at gigahertz frequencies triggered by single photons in the near infrared. This process is mediated by a quantum dot embedded on chip in an optomechanical circuit, which allows for the simultaneous control of the relevant photonic and phononic frequencies. We devise new optomechanical circuit elements that constitute the necessary building blocks for the proposed scheme and are readily implementable within the current state-of-the-art of nanofabrication. This will open new avenues for implementing quantum functionalities based on phonons as an on-chip quantum bus.
A Deterministic Transport Code for Space Environment Electrons
NASA Technical Reports Server (NTRS)
Nealy, John E.; Chang, C. K.; Norman, Ryan B.; Blattnig, Steve R.; Badavi, Francis F.; Adamczyk, Anne M.
2010-01-01
A deterministic computational procedure has been developed to describe transport of space environment electrons in various shield media. This code is an upgrade and extension of an earlier electron code. Whereas the former code was formulated on the basis of parametric functions derived from limited laboratory data, the present code utilizes well established theoretical representations to describe the relevant interactions and transport processes. The shield material specification has been made more general, as have the pertinent cross sections. A combined mean free path and average trajectory approach has been used in the transport formalism. Comparisons with Monte Carlo calculations are presented.
Deterministic versus stochastic aspects of superexponential population growth models
NASA Astrophysics Data System (ADS)
Grosjean, Nicolas; Huillet, Thierry
2016-08-01
Deterministic population growth models with power-law rates can exhibit a large variety of growth behaviors, ranging from algebraic and exponential to hyperexponential (finite-time explosion). In this setup, self-similarity considerations play a key role, together with two time substitutions. Two stochastic versions of such models are investigated, showing a much richer variety of behaviors. One is the Lamperti construction of self-similar positive stochastic processes based on the exponentiation of spectrally positive processes, followed by an appropriate time change. The other is based on stable continuous-state branching processes, given by another Lamperti time substitution applied to stable spectrally positive processes.
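The deterministic finite-time explosion mentioned above is easy to exhibit. For dx/dt = x^p with p > 1 the solution x(t) = x0 / (1 − (p−1) x0^(p−1) t)^(1/(p−1)) blows up at t* = 1 / ((p−1) x0^(p−1)). The Python sketch below checks a numerical integration against this closed form (the exponent and initial value are illustrative choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Power-law growth dx/dt = x^p; for p > 1 the solution explodes at the
# finite time t* = 1 / ((p - 1) * x0**(p - 1))  (hyperexponential growth).
p, x0 = 2.0, 1.0
t_star = 1.0 / ((p - 1) * x0 ** (p - 1))      # = 1.0 for these values

t_eval = np.linspace(0, 0.9, 10)               # stop safely short of t*
sol = solve_ivp(lambda t, x: x ** p, (0, 0.9), [x0],
                t_eval=t_eval, rtol=1e-10, atol=1e-12)
exact = x0 / (1 - (p - 1) * x0 ** (p - 1) * t_eval) ** (1 / (p - 1))
assert np.allclose(sol.y[0], exact, rtol=1e-6)
print(t_star, sol.y[0][-1])   # x(0.9) = 10 for p = 2, x0 = 1
```

The stochastic versions studied in the paper replace this ODE flow by time-changed positive processes, where explosion becomes a random event.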
CALTRANS: A parallel, deterministic, 3D neutronics code
Carson, L.; Ferguson, J.; Rogers, J.
1994-04-01
Our efforts to parallelize the deterministic solution of the neutron transport equation have culminated in a new neutronics code, CALTRANS, which has full 3D capability. In this article, we describe the layout and algorithms of CALTRANS and present performance measurements of the code on a variety of platforms. Explicit implementations of the parallel algorithms of CALTRANS using both the function calls of the Parallel Virtual Machine software package (PVM 3.2) and the Meiko CS-2 tagged message passing library (based on the Intel NX/2 interface) are provided in appendices.
Non-deterministic analysis of ocean environment loads
Fang Huacan; Xu Fayan; Gao Guohua; Xu Xingping
1995-12-31
Ocean environment loads consist of wind force, sea wave force, etc. Sea wave force not only has randomness but also fuzziness. Hence a non-deterministic description of the wave environment must be carried out when designing an offshore structure or evaluating the safety of offshore structure members in service. In order to account for the randomness of sea waves, a single-parameter wind-speed sea wave spectrum is proposed in this paper. A new fuzzy grading statistical method for treating the fuzziness of the sea wave height H and period T is also given. The principle and process of calculating the fuzzy random sea wave spectrum are presented last.
Deterministic Superreplication of One-Parameter Unitary Transformations
NASA Astrophysics Data System (ADS)
Dür, W.; Sekatski, P.; Skotiniotis, M.
2015-03-01
We show that one can deterministically generate, out of N copies of an unknown unitary operation, up to N² almost perfect copies. The result holds for all operations generated by a Hamiltonian with an unknown interaction strength. This generalizes a similar result in the context of phase-covariant cloning where, however, superreplication comes at the price of an exponentially reduced probability of success. We also show that multiple copies of unitary operations can be emulated by operations acting on a much smaller space, e.g., a magnetic field acting on a single n-level system allows one to emulate the action of the field on n² qubits.
The deterministic optical alignment of the HERMES spectrograph
NASA Astrophysics Data System (ADS)
Gers, Luke; Staszak, Nicholas
2014-07-01
The High Efficiency and Resolution Multi Element Spectrograph (HERMES) is a four-channel, VPH-grating spectrograph fed by two 400-fiber slit assemblies, whose construction and commissioning have now been completed at the Anglo-Australian Telescope (AAT). The size, weight, complexity, and scheduling constraints of the system necessitated that a fully integrated, deterministic, opto-mechanical alignment system be designed into the spectrograph before it was manufactured. This paper presents the principles about which the system was assembled and aligned, including the equipment and the metrology methods employed to complete the spectrograph integration.
Application of deterministic chaos analysis to investigating CFB hydrodynamics
Yin, C.; Luo, Z.; Li, X.; Fang, M.; Ni, M.; Cen, K.
1997-12-31
This paper presents an application of deterministic chaos analysis to the behavior of a gas-solid circulating fluidized bed (CFB). Two improvements to the traditional algorithm are put forward: a rule and a mathematical model are presented to determine the no-scale interval, and an improved formula and the corresponding recurrence formula are given to calculate distances. Calculation results for different operating conditions indicate that the correlation dimension and Kolmogorov entropy can be employed to characterize fluidization regimes and their transitions, and may be used to detect abnormal conditions in a CFB.
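The correlation dimension referred to above is usually estimated with the Grassberger-Procaccia correlation sum. The CFB pressure signals themselves are not available here, so the Python sketch below applies the estimator to the Henon-map attractor as a stand-in chaotic signal (scaling-region radii chosen by hand):

```python
import numpy as np
from scipy.spatial.distance import pdist

# Henon-map attractor as a stand-in chaotic signal; this only illustrates the
# Grassberger-Procaccia correlation-dimension estimate the paper relies on.
a, b = 1.4, 0.3
x, y = 0.1, 0.1
pts = []
for i in range(3000):
    x, y = 1.0 - a * x * x + y, b * x
    if i >= 100:                          # discard the initial transient
        pts.append((x, y))
pts = np.array(pts)

d = pdist(pts)                            # all pairwise distances
rs = np.logspace(-2, -0.5, 8)             # radii in an assumed scaling region
C = np.array([(d < r).mean() for r in rs])  # correlation sums C(r)

# Correlation dimension = slope of log C(r) versus log r
slope = float(np.polyfit(np.log(rs), np.log(C), 1)[0])
print(slope)   # close to the known value ~1.2 for the Henon attractor
```

For experimental CFB signals one would first embed the scalar time series (delay embedding) before computing the pairwise distances.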
Deterministic Ants in Labyrinth — Information Gained by Map Sharing
NASA Astrophysics Data System (ADS)
Malinowski, Janusz; Kantelhardt, Jan W.; Kułakowski, Krzysztof
2013-06-01
A few ant robots are placed in a labyrinth formed by a square lattice with a small number of corridors removed. The ants move according to a deterministic algorithm designed to explore all corridors. Each ant remembers the shape of the corridors it has visited. Once two ants meet, they share the information acquired. We evaluate how the time for an ant to obtain complete information depends on the number of ants, and how the corridor length known to an ant depends on time. Numerical results are presented in the form of scaling relations.
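The exploration-plus-map-sharing dynamic can be sketched as follows. The Python sketch below makes simplifying assumptions: the full lattice is kept (no corridors removed), each ant walks a deterministic depth-first search driven only by its own visited set, and shared knowledge is recorded but does not redirect the walk; an ant is "fully informed" when its merged map covers the lattice:

```python
from itertools import combinations

N = 6  # lattice side; the full lattice is used here for simplicity
       # (the paper removes a few corridors)

def neighbors(c):
    x, y = c
    cand = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(u, v) for u, v in cand if 0 <= u < N and 0 <= v < N]

ALL = {(x, y) for x in range(N) for y in range(N)}

def explore(starts):
    """Deterministic DFS ants; co-located ants merge their maps."""
    ants = [{"pos": s, "stack": [], "visited": {s}, "known": {s}}
            for s in starts]
    for t in range(1, 1000):
        for a in ants:
            nxt = next((n for n in neighbors(a["pos"])
                        if n not in a["visited"]), None)
            if nxt is not None:                    # advance to a fresh cell
                a["stack"].append(a["pos"])
                a["pos"] = nxt
                a["visited"].add(nxt)
                a["known"].add(nxt)
            elif a["stack"]:                       # dead end: backtrack
                a["pos"] = a["stack"].pop()
        for a, b in combinations(ants, 2):         # map sharing on meeting
            if a["pos"] == b["pos"]:
                shared = a["known"] | b["known"]
                a["known"], b["known"] = set(shared), set(shared)
        if any(a["known"] == ALL for a in ants):
            return t                               # first fully informed ant
    return None

t1 = explore([(0, 0)])
t4 = explore([(0, 0), (N - 1, 0), (0, N - 1), (N - 1, N - 1)])
print(t1, t4)   # sharing can only shorten (or match) the completion time
```

Plotting these completion times against the number of ants for growing lattice sizes is how the scaling relations in the paper are obtained.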
Deterministic controlled remote state preparation using partially entangled quantum channel
NASA Astrophysics Data System (ADS)
Chen, Na; Quan, Dong Xiao; Yang, Hong; Pei, Chang Xing
2016-04-01
In this paper, we propose a novel scheme for deterministic controlled remote state preparation (CRSP) of arbitrary two-qubit states. A suitably chosen partially entangled state is used as the quantum channel. With proper projective measurements carried out by the sender and controller, the receiver can reconstruct the target state by means of an appropriate unitary operation. Unit success probability can be achieved for arbitrary two-qubit states. Unlike some previous CRSP schemes utilizing partially entangled channels, no auxiliary qubit is required in our scheme. We also show that the success probability is independent of the parameters of the partially entangled quantum channel.
Coupled Deterministic-Monte Carlo Transport for Radiation Portal Modeling
Smith, Leon E.; Miller, Erin A.; Wittman, Richard S.; Shaver, Mark W.
2008-01-14
Radiation portal monitors are being deployed, both domestically and internationally, to detect illicit movement of radiological materials concealed in cargo. Evaluation of the current and next generations of these radiation portal monitor (RPM) technologies is an ongoing process. 'Injection studies' that superimpose, computationally, the signature from threat materials onto empirical vehicle profiles collected at ports of entry, are often a component of the RPM evaluation process. However, measurement of realistic threat devices can be both expensive and time-consuming. Radiation transport methods that can predict the response of radiation detection sensors with high fidelity, and do so rapidly enough to allow the modeling of many different threat-source configurations, are a cornerstone of reliable evaluation results. Monte Carlo methods have been the primary tool of the detection community for these kinds of calculations, in no small part because they are particularly effective for calculating pulse-height spectra in gamma-ray spectrometers. However, computational times for problems with a high degree of scattering and absorption can be extremely long. Deterministic codes that discretize the transport in space, angle, and energy offer potential advantages in computational efficiency for these same kinds of problems, but the pulse-height calculations needed to predict gamma-ray spectrometer response are not readily accessible. These complementary strengths for radiation detection scenarios suggest that coupling Monte Carlo and deterministic methods could be beneficial in terms of computational efficiency. Pacific Northwest National Laboratory and its collaborators are developing a RAdiation Detection Scenario Analysis Toolbox (RADSAT) founded on this coupling approach. The deterministic core of RADSAT is Attila, a three-dimensional, tetrahedral-mesh code originally developed by Los Alamos National Laboratory, and since expanded and refined by Transpire, Inc. [1
The integrated model for solving the single-period deterministic inventory routing problem
NASA Astrophysics Data System (ADS)
Rahim, Mohd Kamarul Irwan Abdul; Abidin, Rahimi; Iteng, Rosman; Lamsali, Hendrik
2016-08-01
This paper discusses the problem of efficiently managing inventory and routing in a two-level supply chain system. Vendor Managed Inventory (VMI) is a policy that integrates the decisions of a supplier and his customers. We assume that the demand at each customer is stationary and that the warehouse implements VMI. The objective of this paper is to minimize the inventory and transportation costs of the customers in a two-level supply chain. The problem is to determine the delivery quantities, delivery times, and routes to the customers in the single-period deterministic inventory routing problem (SP-DIRP) system. As a result, a linear mixed-integer program is developed to solve the SP-DIRP problem.
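The routing side of an SP-DIRP can be illustrated on a toy instance. The sketch below is not the paper's mixed-integer program; it is a brute-force enumeration over delivery routes on hypothetical data (the distance matrix and customer set are invented for illustration), which stands in for the routing decision variables of the MILP:

```python
from itertools import permutations

# Toy single-period instance (hypothetical data, not from the paper):
# node 0 is the depot; each customer receives its full demand this period,
# so only the route order remains to be optimized.
dist = {
    (0, 1): 4, (0, 2): 6, (0, 3): 5,
    (1, 2): 3, (1, 3): 7, (2, 3): 2,
}

def d(a, b):
    # symmetric distance lookup
    if a == b:
        return 0
    return dist[(min(a, b), max(a, b))]

def route_cost(route):
    # cost of depot -> customers in the given order -> depot
    stops = [0] + list(route) + [0]
    return sum(d(a, b) for a, b in zip(stops, stops[1:]))

def best_route(customers):
    # brute-force enumeration stands in for the MILP's routing decisions
    return min(permutations(customers), key=route_cost)

r = best_route([1, 2, 3])
print(r, route_cost(r))
```

In the actual MILP, route selection, delivery quantities, and delivery times are decided jointly by the solver; the enumeration above only conveys what the routing variables encode.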
Latanision, R.M.
1990-12-01
Electrochemical corrosion is pervasive in virtually all engineering systems and in virtually all industrial circumstances. Although engineers now understand how to design systems to minimize corrosion in many instances, many fundamental questions remain poorly understood and, therefore, the development of corrosion control strategies is based more on empiricism than on a deep understanding of the processes by which metals corrode in electrolytes. Fluctuations in potential, or current, in electrochemical systems have been observed for many years. To date, all investigations of this phenomenon have utilized non-deterministic analyses. In this work it is proposed to study electrochemical noise from a deterministic viewpoint by comparison of experimental parameters, such as first and second order moments (non-deterministic), with computer simulation of corrosion at metal surfaces. In this way it is proposed to analyze the origins of these fluctuations and to elucidate the relationship between these fluctuations and kinetic parameters associated with metal dissolution and cathodic reduction reactions. This research program addresses in essence two areas of interest: (a) computer modeling of corrosion processes in order to study the electrochemical processes on an atomistic scale, and (b) experimental investigations of fluctuations in electrochemical systems and correlation of experimental results with computer modeling. In effect, the noise generated by mathematical modeling will be analyzed and compared to experimental noise in electrochemical systems. 1 fig.
System Simulation of Nuclear Power Plant by Coupling RELAP5 and Matlab/Simulink
Meng Lin; Dong Hou; Zhihong Xu; Yanhua Yang; Ronghua Zhang
2006-07-01
Since the RELAP5 code has general and advanced features in thermal-hydraulic computation, it has been widely used in transient and accident safety analysis, experiment planning analysis, and system simulation. We therefore wish to design, analyze, and verify a new Instrumentation And Control (I and C) system of a Nuclear Power Plant (NPP) based on this best-estimate code, and eventually to develop our own engineering simulator. However, because RELAP5's ability to simulate control and protection systems is limited, its functionality must be extended for efficient, accurate, and flexible design and simulation of I and C systems. Matlab/Simulink, a scientific computation package that is a powerful tool for research and simulation of plant process control, can compensate for this limitation. This software was therefore selected as the I and C part to be coupled with the RELAP5 code to realize system simulation of NPPs. Two key problems had to be solved. The first is dynamic data exchange, by which Matlab/Simulink receives plant parameters and returns control results. A database is used for communication between the two codes: a Dynamic Link Library (DLL) is applied to link the database in RELAP5, while a DLL and an S-Function are applied in Matlab/Simulink. The second problem is synchronization between the two codes, to ensure consistency of the global simulation time. Because Matlab/Simulink always computes faster than RELAP5, the simulation time is sent by RELAP5 and received by Matlab/Simulink, and a time-control subroutine added to the Matlab/Simulink simulation procedure controls its advancement. Through these mechanisms, Matlab/Simulink is dynamically coupled with RELAP5. Thus, in Matlab/Simulink, we can freely design the control and protection logic of an NPP and test it with best-estimate plant model feedback. A test is shown to demonstrate that the results of the coupled calculation are nearly the same as those of RELAP5 alone with control logic. In practice, a real Pressurized Water Reactor (PWR) is
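The synchronization scheme described above can be sketched abstractly: the slower code publishes its simulation time and the faster code never advances past it. The classes below are hypothetical stand-ins for RELAP5 and Matlab/Simulink (none of the real data-exchange machinery, DLLs, or databases is modeled), assuming only the time-control idea from the abstract:

```python
# Minimal sketch of the time-synchronization idea: the thermal-hydraulic
# side publishes its simulation time, and the faster control side never
# advances past it.

class SlowSolver:                     # stands in for RELAP5
    def __init__(self, dt):
        self.t, self.dt = 0.0, dt
    def step(self):
        self.t += self.dt             # advance one coarse step
        return self.t                 # "send" the current simulation time

class FastSolver:                     # stands in for Matlab/Simulink
    def __init__(self, dt):
        self.t, self.dt = 0.0, dt
    def advance_to(self, t_slow):
        # time-control loop: take fine steps, but never pass t_slow
        while self.t + self.dt <= t_slow + 1e-12:
            self.t += self.dt

relap, simulink = SlowSolver(dt=0.1), FastSolver(dt=0.01)
for _ in range(5):
    t = relap.step()          # data exchange would happen here (DLL/database)
    simulink.advance_to(t)    # control side catches up, then waits
print(round(relap.t, 3), round(simulink.t, 3))
```

After each coarse step the two clocks agree to within one fine step, which is the consistency property the coupling requires.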
Predictability of normal heart rhythms and deterministic chaos
NASA Astrophysics Data System (ADS)
Lefebvre, J. H.; Goodings, D. A.; Kamath, M. V.; Fallen, E. L.
1993-04-01
The evidence for deterministic chaos in normal heart rhythms is examined. Electrocardiograms were recorded of 29 subjects falling into four groups—a young healthy group, an older healthy group, and two groups of patients who had recently suffered an acute myocardial infarction. From the measured R-R intervals, a time series of 1000 first differences was constructed for each subject. The correlation integral of Grassberger and Procaccia was calculated for several subjects using these relatively short time series. No evidence was found for the existence of an attractor having a dimension less than about 4. However, a prediction method recently proposed by Sugihara and May and an autoregressive linear predictor both show that there is a measure of short-term predictability in the differenced R-R intervals. Further analysis revealed that the short-term predictability calculated by the Sugihara-May method is not consistent with the null hypothesis of a Gaussian random process. The evidence for a small amount of nonlinear dynamical behavior together with the short-term predictability suggest that there is an element of deterministic chaos in normal heart rhythms, although it is not strong or persistent. Finally, two useful parameters of the predictability curves are identified, namely, the `first step predictability' and the `predictability decay rate,' neither of which appears to be significantly correlated with the standard deviation of the R-R intervals.
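The Grassberger-Procaccia correlation integral used above can be sketched in a few lines: delay-embed the series, then count the fraction of point pairs closer than a radius r; the slope of log C(r) versus log r estimates the correlation dimension. This is a generic textbook sketch on a logistic-map series, not the study's R-R interval data:

```python
import math

def embed(x, m, tau=1):
    # delay-embed a scalar series into m-dimensional vectors
    return [x[i:i + m * tau:tau] for i in range(len(x) - (m - 1) * tau)]

def correlation_integral(x, m, r, tau=1):
    # C(r): fraction of point pairs closer than r (Chebyshev distance)
    pts = embed(x, m, tau)
    n = len(pts)
    close = sum(
        1
        for i in range(n)
        for j in range(i + 1, n)
        if max(abs(a - b) for a, b in zip(pts[i], pts[j])) < r
    )
    return 2.0 * close / (n * (n - 1))

# logistic map as a toy deterministic series
x, series = 0.4, []
for _ in range(500):
    x = 4.0 * x * (1.0 - x)
    series.append(x)

# slope of log C(r) vs log r estimates the correlation dimension
c1, c2 = correlation_integral(series, 3, 0.05), correlation_integral(series, 3, 0.1)
slope = (math.log(c2) - math.log(c1)) / (math.log(0.1) - math.log(0.05))
print(round(slope, 2))
```

For short, noisy series such as the 1000-point R-R data, the scaling region is narrow, which is why the study found no attractor of dimension below about 4 rather than a clean estimate.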
Shock-induced explosive chemistry in a deterministic sample configuration.
Stuecker, John Nicholas; Castaneda, Jaime N.; Cesarano, Joseph, III; Trott, Wayne Merle; Baer, Melvin R.; Tappan, Alexander Smith
2005-10-01
Explosive initiation and energy release have been studied in two sample geometries designed to minimize stochastic behavior in shock-loading experiments. These sample concepts include a design with explosive material occupying the hole locations of a close-packed bed of inert spheres and a design that utilizes infiltration of a liquid explosive into a well-defined inert matrix. Wave profiles transmitted by these samples in gas-gun impact experiments have been characterized by both velocity interferometry diagnostics and three-dimensional numerical simulations. Highly organized wave structures associated with the characteristic length scales of the deterministic samples have been observed. Initiation and reaction growth in an inert matrix filled with sensitized nitromethane (a homogeneous explosive material) result in wave profiles similar to those observed with heterogeneous explosives. Comparison of experimental and numerical results indicates that energetic material studies in deterministic sample geometries can provide an important new tool for validation of models of energy release in numerical simulations of explosive initiation and performance.
Deterministic doping and the exploration of spin qubits
Schenkel, T.; Weis, C. D.; Persaud, A.; Lo, C. C.; Chakarov, I.; Schneider, D. H.; Bokor, J.
2015-01-09
Deterministic doping by single ion implantation, the precise placement of individual dopant atoms into devices, is a path for the realization of quantum computer test structures where quantum bits (qubits) are based on electron and nuclear spins of donors or color centers. We present a donor - quantum dot type qubit architecture and discuss the use of medium and highly charged ions extracted from an Electron Beam Ion Trap/Source (EBIT/S) for deterministic doping. EBIT/S are attractive for the formation of qubit test structures due to the relatively low emittance of ion beams from an EBIT/S and due to the potential energy associated with the ions' charge state, which can aid single ion impact detection. Following ion implantation, dopant specific diffusion mechanisms during device processing affect the placement accuracy and coherence properties of donor spin qubits. For bismuth, range straggling is minimal but its relatively low solubility in silicon limits thermal budgets for the formation of qubit test structures.
Strongly Deterministic Population Dynamics in Closed Microbial Communities
NASA Astrophysics Data System (ADS)
Frentz, Zak; Kuehn, Seppe; Leibler, Stanislas
2015-10-01
Biological systems are influenced by random processes at all scales, including molecular, demographic, and behavioral fluctuations, as well as by their interactions with a fluctuating environment. We previously established microbial closed ecosystems (CES) as model systems for studying the role of random events and the emergent statistical laws governing population dynamics. Here, we present long-term measurements of population dynamics using replicate digital holographic microscopes that maintain CES under precisely controlled external conditions while automatically measuring abundances of three microbial species via single-cell imaging. With this system, we measure spatiotemporal population dynamics in more than 60 replicate CES over periods of months. In contrast to previous studies, we observe strongly deterministic population dynamics in replicate systems. Furthermore, we show that previously discovered statistical structure in abundance fluctuations across replicate CES is driven by variation in external conditions, such as illumination. In particular, we confirm the existence of stable ecomodes governing the correlations in population abundances of three species. The observation of strongly deterministic dynamics, together with stable structure of correlations in response to external perturbations, points towards a possibility of simple macroscopic laws governing microbial systems despite numerous stochastic events present on microscopic levels.
A DETERMINISTIC METHOD FOR TRANSIENT, THREE-DIMENSIONAL NEUTRON TRANSPORT
Goluoglu, S.; Bentley, C.; Demeglio, R.; Dunn, M.; Norton, K.; Pevey, R.; Suslov, I.; Dodds, H. L.
1998-01-14
A deterministic method for solving the time-dependent, three-dimensional Boltzmann transport equation with explicit representation of delayed neutrons has been developed and evaluated. The methodology used in this study for the time variable of the neutron flux is known as the improved quasi-static (IQS) method. The position, energy, and angle-dependent neutron flux is computed deterministically by using the three-dimensional discrete ordinates code TORT. This paper briefly describes the methodology and selected results. The code developed at the University of Tennessee based on this methodology is called TDTORT. TDTORT can be used to model transients involving voided and/or strongly absorbing regions that require transport theory for accuracy. This code can also be used to model either small high-leakage systems, such as space reactors, or asymmetric control rod movements. TDTORT can model step, ramp, step followed by another step, and step followed by ramp type perturbations. It can also model columnwise rod movement. A special case of columnwise rod movement in a three-dimensional model of a boiling water reactor (BWR) with simple adiabatic feedback is also included. TDTORT is verified through several transient one-dimensional, two-dimensional, and three-dimensional benchmark problems. The results show that the transport methodology and corresponding code developed in this work have sufficient accuracy and speed for computing the dynamic behavior of complex multidimensional neutronic systems.
A deterministic method for transient, three-dimensional neutron transport
Goluoglu, S.; Bentley, C.; DeMeglio, R.; Dunn, M.; Norton, K.; Pevey, R.; Suslov, I.; Dodds, H.L.
1998-05-01
A deterministic method for solving the time-dependent, three-dimensional Boltzmann transport equation with explicit representation of delayed neutrons has been developed and evaluated. The methodology used in this study for the time variable of the neutron flux is known as the improved quasi-static (IQS) method. The position, energy, and angle-dependent neutron flux is computed deterministically by using the three-dimensional discrete ordinates code TORT. This paper briefly describes the methodology and selected results. The code developed at the University of Tennessee based on this methodology is called TDTORT. TDTORT can be used to model transients involving voided and/or strongly absorbing regions that require transport theory for accuracy. This code can also be used to model either small high-leakage systems, such as space reactors, or asymmetric control rod movements. TDTORT can model step, ramp, step followed by another step, and step followed by ramp type perturbations. It can also model columnwise rod movement. A special case of columnwise rod movement in a three-dimensional model of a boiling water reactor (BWR) with simple adiabatic feedback is also included. TDTORT is verified through several transient one-dimensional, two-dimensional, and three-dimensional benchmark problems. The results show that the transport methodology and corresponding code developed in this work have sufficient accuracy and speed for computing the dynamic behavior of complex multi-dimensional neutronic systems.
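The IQS method factorizes the flux into a slowly varying shape function (computed here by TORT) and a rapidly varying amplitude that obeys the point-kinetics equations with delayed neutrons. The role of the explicit delayed-neutron representation can be sketched at the amplitude level; the sketch below uses one delayed-neutron group and illustrative constants, not TDTORT's actual data or algorithm:

```python
# Point-kinetics sketch with one delayed-neutron group.
beta, lam, Lambda = 0.0065, 0.08, 1e-4   # delayed fraction, precursor decay const (1/s), generation time (s)

def step_transient(rho, t_end, dt=1e-5):
    # explicit-Euler integration of the amplitude and precursor equations
    n, c = 1.0, beta / (Lambda * lam)    # start from equilibrium at n = 1
    t = 0.0
    while t < t_end:
        dn = ((rho - beta) / Lambda) * n + lam * c
        dc = (beta / Lambda) * n - lam * c
        n, c, t = n + dt * dn, c + dt * dc, t + dt
    return n

# a +10-cent reactivity step produces a prompt jump followed by a slow rise
print(round(step_transient(0.1 * beta, t_end=1.0), 3))
```

Without the precursor term the amplitude would respond on the prompt time scale Lambda alone; the delayed neutrons are what make sub-prompt-critical transients slow enough to matter for the BWR feedback cases the abstract mentions.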
Appropriate time scales for nonlinear analyses of deterministic jump systems
NASA Astrophysics Data System (ADS)
Suzuki, Tomoya
2011-06-01
In the real world, there are many phenomena that are derived from deterministic systems but which fluctuate with nonuniform time intervals. This paper discusses the appropriate time scales that can be applied to such systems to analyze their properties. The financial markets are an example of such systems wherein price movements fluctuate with nonuniform time intervals. However, it is common to apply uniform time scales such as 1-min data and 1-h data to study price movements. This paper examines the validity of such time scales by using surrogate data tests to ascertain whether the deterministic properties of the original system can be identified from uniform sampled data. The results show that uniform time samplings are often inappropriate for nonlinear analyses. However, for other systems such as neural spikes and Internet traffic packets, which produce similar outputs, uniform time samplings are quite effective in extracting the system properties. Nevertheless, uniform samplings often generate overlapping data, which can cause false rejections of surrogate data tests.
Deterministic Stress Modeling of Hot Gas Segregation in a Turbine
NASA Technical Reports Server (NTRS)
Busby, Judy; Sondak, Doug; Staubach, Brent; Davis, Roger
1998-01-01
Simulation of unsteady viscous turbomachinery flowfields is presently impractical as a design tool due to the long run times required. Designers rely predominantly on steady-state simulations, but these simulations do not account for some of the important unsteady flow physics. Unsteady flow effects can be modeled as source terms in the steady flow equations. These source terms, referred to as Lumped Deterministic Stresses (LDS), can be used to drive steady flow solution procedures to reproduce the time-average of an unsteady flow solution. The goal of this work is to investigate the feasibility of using inviscid lumped deterministic stresses to model unsteady combustion hot streak migration effects on the turbine blade tip and outer air seal heat loads using a steady computational approach. The LDS model is obtained from an unsteady inviscid calculation. The LDS model is then used with a steady viscous computation to simulate the time-averaged viscous solution. Both two-dimensional and three-dimensional applications are examined. The inviscid LDS model produces good results for the two-dimensional case and requires less than 10% of the CPU time of the unsteady viscous run. For the three-dimensional case, the LDS model does a good job of reproducing the time-averaged viscous temperature migration and separation as well as heat load on the outer air seal at a CPU cost that is 25% of that of an unsteady viscous computation.
Integrability of a deterministic cellular automaton driven by stochastic boundaries
NASA Astrophysics Data System (ADS)
Prosen, Tomaž; Mejía-Monasterio, Carlos
2016-05-01
We propose an interacting many-body space–time-discrete Markov chain model, which is composed of an integrable deterministic and reversible cellular automaton (rule 54 of Bobenko et al 1993 Commun. Math. Phys. 158 127) on a finite one-dimensional lattice (Z_2)^{×n}, and local stochastic Markov chains at the two lattice boundaries which provide chemical baths for absorbing or emitting the solitons. Ergodicity and mixing of this many-body Markov chain is proven for generic values of bath parameters, implying the existence of a unique nonequilibrium steady state. The latter is constructed exactly and explicitly in terms of a particularly simple form of matrix product ansatz which is termed a patch ansatz. This gives rise to an explicit computation of observables and k-point correlations in the steady state as well as the construction of a nontrivial set of local conservation laws. The feasibility of an exact solution for the full spectrum and eigenvectors (decay modes) of the Markov matrix is suggested as well. We conjecture that our ideas can pave the road towards a theory of integrability of boundary driven classical deterministic lattice systems.
Non-Deterministic Dynamic Instability of Composite Shells
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Abumeri, Galib H.
2004-01-01
A computationally effective method is described to evaluate the non-deterministic dynamic instability (probabilistic dynamic buckling) of thin composite shells. The method is a judicious combination of available computer codes for finite element, composite mechanics, and probabilistic structural analysis. The solution method is incrementally updated Lagrangian. It is illustrated by applying it to a thin composite cylindrical shell subjected to dynamic loads. Both deterministic and probabilistic buckling loads are evaluated to demonstrate the effectiveness of the method. A universal plot is obtained for the specific shell that can be used to approximate buckling loads for different load rates and different probability levels. Results from this plot show that the faster the loading rate, the higher the buckling load and the shorter the time to buckle. The lower the probability, the lower the buckling load for a specific time. Probabilistic sensitivity results show that the ply thickness, the fiber volume ratio, the fiber longitudinal modulus, the dynamic load, and the loading rate are the dominant uncertainties, in that order.
Forced Translocation of Polymer through Nanopore: Deterministic Model and Simulations
NASA Astrophysics Data System (ADS)
Wang, Yanqian; Panyukov, Sergey; Liao, Qi; Rubinstein, Michael
2012-02-01
We propose a new theoretical model of forced translocation of a polymer chain through a nanopore. We assume that DNA translocation at high fields proceeds too fast for the chain to relax, and thus the chain unravels loop by loop in an almost deterministic way. The distribution of translocation times of a given monomer is therefore controlled by the initial conformation of the chain (the distribution of its loops). Our model predicts the translocation time of each monomer as an explicit function of the initial polymer conformation. We refer to this concept as ``fingerprinting''. The width of the translocation time distribution is determined by the loop distribution of the initial conformation as well as by the thermal fluctuations of the polymer chain during the translocation process. We show that the conformational broadening of the translocation time of the m-th monomer scales as δt ∝ m^1.5, which is stronger than the thermal broadening, δt ∝ m^1.25. The predictions of our deterministic model were verified by extensive molecular dynamics simulations.
Quantum secure direct communication and deterministic secure quantum communication
NASA Astrophysics Data System (ADS)
Long, Gui-Lu; Deng, Fu-Guo; Wang, Chuan; Li, Xi-Han; Wen, Kai; Wang, Wan-Ying
2007-07-01
In this review article, we review the recent development of quantum secure direct communication (QSDC) and deterministic secure quantum communication (DSQC), both of which are used to transmit secret messages, including the criteria for QSDC, some interesting QSDC protocols, the DSQC protocols and QSDC network, etc. The difference between these two branches of quantum communication is that DSQC requires the two parties to exchange at least one bit of classical information for reading out the message in each qubit, whereas QSDC does not. They are attractive because they are deterministic; in particular, the QSDC protocol is fully quantum mechanical. With sophisticated quantum technology in the future, QSDC may become more and more popular. For ensuring the safety of QSDC with single photons and quantum information sharing of single qubits in a noisy channel, a quantum privacy amplification protocol has been proposed. It involves very simple CHC operations and reduces the information leakage to a negligibly small level. Moreover, with one-party quantum error correction, a relation has been established between classical linear codes and quantum one-party codes, making it convenient to transfer many good classical error correction codes to the quantum world. The one-party quantum error correction codes are especially designed for quantum dense coding and related QSDC protocols based on dense coding.
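The dense coding primitive underlying the QSDC protocols mentioned above can be simulated classically for two qubits: Alice applies one of four Pauli operations to her half of a shared Bell pair, and Bob's Bell-basis measurement recovers two classical bits. This is a textbook sketch, not any specific protocol from the review; amplitudes are real here, so plain inner products suffice:

```python
import math

# 4-dim state vectors ordered |00>, |01>, |10>, |11>; Alice holds qubit 1
s = 1 / math.sqrt(2)
bell = {
    (0, 0): [s, 0, 0, s],    # |Phi+>
    (0, 1): [0, s, s, 0],    # |Psi+>
    (1, 0): [s, 0, 0, -s],   # |Phi->
    (1, 1): [0, s, -s, 0],   # |Psi->
}

def apply_alice(gate, state):
    # apply a 2x2 gate to the first qubit of a two-qubit state
    out = [0.0] * 4
    for a_out in (0, 1):
        for a_in in (0, 1):
            amp = gate[a_out][a_in]
            if amp:
                for b in (0, 1):
                    out[2 * a_out + b] += amp * state[2 * a_in + b]
    return out

I2 = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]
XZ = [[0, -1], [1, 0]]       # X applied after Z

encodings = {(0, 0): I2, (0, 1): X, (1, 0): Z, (1, 1): XZ}

def send(bits):
    state = apply_alice(encodings[bits], bell[(0, 0)])
    # Bob's Bell measurement: the outcome with probability ~1 reveals the bits
    return max(bell, key=lambda b: abs(sum(u * v for u, v in zip(bell[b], state))) ** 2)

for bits in encodings:
    assert send(bits) == bits
print("all four two-bit messages decoded correctly")
```

Two classical bits ride on a single transmitted qubit because the four Pauli encodings map the shared Bell state onto four orthogonal Bell states, which is exactly the property the one-party codes in the review are designed to protect.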
Deterministic nature of the underlying dynamics of surface wind fluctuations
NASA Astrophysics Data System (ADS)
Sreelekshmi, R. C.; Asokan, K.; Satheesh Kumar, K.
2012-10-01
Modelling the fluctuations of the Earth's surface wind has a significant role in understanding the dynamics of atmosphere besides its impact on various fields ranging from agriculture to structural engineering. Most of the studies on the modelling and prediction of wind speed and power reported in the literature are based on statistical methods or the probabilistic distribution of the wind speed data. In this paper we investigate the suitability of a deterministic model to represent the wind speed fluctuations by employing tools of nonlinear dynamics. We have carried out a detailed nonlinear time series analysis of the daily mean wind speed data measured at Thiruvananthapuram (8.483° N,76.950° E) from 2000 to 2010. The results of the analysis strongly suggest that the underlying dynamics is deterministic, low-dimensional and chaotic suggesting the possibility of accurate short-term prediction. As most of the chaotic systems are confined to laboratories, this is another example of a naturally occurring time series showing chaotic behaviour.
Deterministic photon-emitter coupling in chiral photonic circuits
NASA Astrophysics Data System (ADS)
Söllner, Immo; Mahmoodian, Sahand; Hansen, Sofie Lindskov; Midolo, Leonardo; Javadi, Alisa; Kiršanskė, Gabija; Pregnolato, Tommaso; El-Ella, Haitham; Lee, Eun Hye; Song, Jin Dong; Stobbe, Søren; Lodahl, Peter
2015-09-01
Engineering photon emission and scattering is central to modern photonics applications ranging from light harvesting to quantum-information processing. To this end, nanophotonic waveguides are well suited as they confine photons to a one-dimensional geometry and thereby increase the light-matter interaction. In a regular waveguide, a quantum emitter interacts equally with photons in either of the two propagation directions. This symmetry is violated in nanophotonic structures in which non-transversal local electric-field components imply that photon emission and scattering may become directional. Here we show that the helicity of the optical transition of a quantum emitter determines the direction of single-photon emission in a specially engineered photonic-crystal waveguide. We observe single-photon emission into the waveguide with a directionality that exceeds 90% under conditions in which practically all the emitted photons are coupled to the waveguide. The chiral light-matter interaction enables deterministic and highly directional photon emission for experimentally achievable on-chip non-reciprocal photonic elements. These may serve as key building blocks for single-photon optical diodes, transistors and deterministic quantum gates. Furthermore, chiral photonic circuits allow the dissipative preparation of entangled states of multiple emitters for experimentally achievable parameters, may lead to novel topological photon states and could be applied for directional steering of light.
An advanced deterministic method for spent fuel criticality safety analysis
DeHart, M.D.
1998-01-01
Over the past two decades, criticality safety analysts have come to rely to a large extent on Monte Carlo methods for criticality calculations. Monte Carlo has become popular because of its capability to model complex, non-orthogonal configurations of fissile materials, typical of real-world problems. Over the last few years, however, interest in deterministic transport methods has been revived, due to shortcomings in the stochastic nature of Monte Carlo approaches for certain types of analyses. Specifically, deterministic methods are superior to stochastic methods for calculations requiring accurate neutron density distributions or differential fluxes. Although Monte Carlo methods are well suited for eigenvalue calculations, they lack the localized detail necessary to assess uncertainties and sensitivities important in determining a range of applicability. Monte Carlo methods are also inefficient as a transport solution for multiple-pin depletion methods. Discrete ordinates methods have long been recognized as one of the most rigorous and accurate approximations used to solve the transport equation. However, until recently, geometric constraints in finite differencing schemes have made discrete ordinates methods impractical for non-orthogonal configurations such as reactor fuel assemblies. The development of an extended step characteristic (ESC) technique removes the grid structure limitations of traditional discrete ordinates methods. The NEWT computer code, a discrete ordinates code built upon the ESC formalism, is being developed as part of the SCALE code system. This paper will demonstrate the power, versatility, and applicability of NEWT as a state-of-the-art solution for current computational needs.
Deterministic Chaos in the X-ray Sources
NASA Astrophysics Data System (ADS)
Grzedzielski, M.; Sukova, P.; Janiuk, A.
2015-12-01
Hardly any of the observed black hole accretion disks in X-ray binaries and active galaxies shows constant flux. When the local stochastic variations of the disk occur at specific regions where a resonant behaviour takes place, quasi-periodic oscillations (QPOs) appear. If the global structure of the flow and its non-linear hydrodynamics affects the fluctuations, the variability is chaotic in the sense of deterministic chaos. Our aim is to determine whether the variability of black hole binaries is stochastic or deterministic in nature. We use both observational and analytic methods. Using recurrence analysis, we study the occurrence of long diagonal lines in the recurrence plots of observed data series and compare them to surrogate series. We analyze here the data of two X-ray binaries, XTE J1550-564 and GX 339-4, observed by the Rossi X-ray Timing Explorer. In these sources, non-linear variability is expected because of the global conditions (such as the mean accretion rate) leading to a possible instability of the accretion disk. The thermal-viscous instability and fluctuations around the fixed-point solution occur at high accretion rates, when radiation pressure gives the dominant contribution to the stress tensor.
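The diagonal-line diagnostic used above can be sketched generically: a recurrence matrix marks pairs of times when the system revisits nearly the same state, and long diagonal runs indicate that nearby states evolve in parallel, the signature of determinism. The sketch below contrasts a periodic signal with uniform noise; it is an illustration of the method, not an analysis of the RXTE data:

```python
import math
import random

def recurrence_matrix(x, eps):
    # R[i][j] = 1 when states i and j are closer than eps
    n = len(x)
    return [[1 if abs(x[i] - x[j]) < eps else 0 for j in range(n)] for i in range(n)]

def longest_diagonal(R):
    # longest diagonal line off the main diagonal; long diagonals signal
    # deterministic (parallel-evolving) trajectory segments
    n, best = len(R), 0
    for k in range(1, n):
        run = 0
        for i in range(n - k):
            run = run + 1 if R[i][i + k] else 0
            best = max(best, run)
    return best

det = [math.sin(2 * math.pi * i / 25) for i in range(200)]   # deterministic
rng = random.Random(1)
noise = [rng.random() for _ in range(200)]                   # stochastic

print(longest_diagonal(recurrence_matrix(det, 0.05)),
      longest_diagonal(recurrence_matrix(noise, 0.05)))
```

In practice the statistic is computed on delay-embedded states and compared against surrogate series, as in the abstract, so that long lines cannot be attributed to linear correlations alone.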
Made-to-order nanocarbons through deterministic plasma nanotechnology
NASA Astrophysics Data System (ADS)
Ren, Yuping; Xu, Shuyan; Rider, Amanda Evelyn; Ostrikov, Kostya (Ken)
2011-02-01
Through a combinatorial approach involving experimental measurement and plasma modelling, it is shown that a high degree of control over diamond-like nanocarbon film sp3/sp2 ratio (and hence film properties) may be exercised, starting at the level of electrons (through modification of the plasma electron energy distribution function). Hydrogenated amorphous carbon nanoparticle films with high percentages of diamond-like bonds are grown using a middle-frequency (2 MHz) inductively coupled Ar + CH4 plasma. The sp3 fractions measured by X-ray photoelectron spectroscopy (XPS) and Raman spectroscopy in the thin films are explained qualitatively using sp3/sp2 ratios (1) derived from calculated sp3- and sp2-hybridized precursor species densities in a global plasma discharge model and (2) measured experimentally. It is shown that at high discharge power and lower CH4 concentrations, the sp3/sp2 fraction is higher. Our results suggest that a combination of predictive modeling and experimental studies is instrumental to achieve deterministically grown, made-to-order diamond-like nanocarbons suitable for a variety of applications spanning from nano-magnetic resonance imaging to spin-flip quantum information devices. This deterministic approach can be extended to graphene, carbon nanotips, nanodiamond and other nanocarbon materials for a variety of applications.
Deterministic photon-emitter coupling in chiral photonic circuits.
Söllner, Immo; Mahmoodian, Sahand; Hansen, Sofie Lindskov; Midolo, Leonardo; Javadi, Alisa; Kiršanskė, Gabija; Pregnolato, Tommaso; El-Ella, Haitham; Lee, Eun Hye; Song, Jin Dong; Stobbe, Søren; Lodahl, Peter
2015-09-01
Engineering photon emission and scattering is central to modern photonics applications ranging from light harvesting to quantum-information processing. To this end, nanophotonic waveguides are well suited as they confine photons to a one-dimensional geometry and thereby increase the light-matter interaction. In a regular waveguide, a quantum emitter interacts equally with photons in either of the two propagation directions. This symmetry is violated in nanophotonic structures in which non-transversal local electric-field components imply that photon emission and scattering may become directional. Here we show that the helicity of the optical transition of a quantum emitter determines the direction of single-photon emission in a specially engineered photonic-crystal waveguide. We observe single-photon emission into the waveguide with a directionality that exceeds 90% under conditions in which practically all the emitted photons are coupled to the waveguide. The chiral light-matter interaction enables deterministic and highly directional photon emission for experimentally achievable on-chip non-reciprocal photonic elements. These may serve as key building blocks for single-photon optical diodes, transistors and deterministic quantum gates. Furthermore, chiral photonic circuits allow the dissipative preparation of entangled states of multiple emitters for experimentally achievable parameters, may lead to novel topological photon states and could be applied for directional steering of light. PMID:26214251
NASA Astrophysics Data System (ADS)
Robichaud, Guillaume; Garrard, Kenneth P.; Barry, Jeremy A.; Muddiman, David C.
2013-05-01
During the past decade, the field of mass spectrometry imaging (MSI) has greatly evolved, to a point where it has now been fully integrated by most vendors as an optional or dedicated platform that can be purchased with their instruments. However, the technology is not mature and multiple research groups in both academia and industry are still very actively studying the fundamentals of imaging techniques, adapting the technology to new ionization sources, and developing new applications. As a result, there is an important variety of data file formats used to store mass spectrometry imaging data and, concurrent with the development of MSI, collaborative efforts have been undertaken to introduce common imaging data file formats. However, few free software packages to read and analyze files of these different formats are readily available. We introduce here MSiReader, a free open source application to read and analyze high resolution MSI data from the most common MSI data formats. The application is built on the Matlab platform (Mathworks, Natick, MA, USA) and includes a large selection of data analysis tools and features. People who are unfamiliar with the Matlab language will have little difficulty navigating the user-friendly interface, and users with Matlab programming experience can adapt and customize MSiReader for their own needs.
Baca, Renee Nicole; Congdon, Michael L.; Brake, Matthew Robert
2014-07-01
In 2012, a Matlab GUI for the prediction of the coefficient of restitution was developed in order to enable the formulation of more accurate Finite Element Analysis (FEA) models of components. This report details the development of a new Rebound Dynamics GUI, and how it differs from the previously developed program. The new GUI includes several new features, such as source and citation documentation for the material database, as well as a multiple materials impact modeler for use with LMS Virtual.Lab Motion (LMS VLM), and a rigid body dynamics modeling software. The Rebound Dynamics GUI has been designed to work with LMS VLM to enable straightforward incorporation of velocity-dependent coefficients of restitution in rigid body dynamics simulations.
Quantum dissonance and deterministic quantum computation with a single qubit
NASA Astrophysics Data System (ADS)
Ali, Mazhar
2014-11-01
Mixed state quantum computation can perform certain tasks which are believed to be efficiently intractable on a classical computer. For a specific model of mixed state quantum computation, namely, deterministic quantum computation with a single qubit (DQC1), recent investigations suggest that quantum correlations other than entanglement might be responsible for the power of the DQC1 model. However, strictly speaking, the role of entanglement in this model of computation was not entirely clear. We provide conclusive evidence that there are instances where quantum entanglement is not present in any part of this model, and nevertheless we retain an advantage over classical computation. This establishes the fact that quantum dissonance (a kind of quantum correlation) present in fully separable (FS) states provides power to the DQC1 model.
Sensitivity analysis in a Lassa fever deterministic mathematical model
NASA Astrophysics Data System (ADS)
Abdullahi, Mohammed Baba; Doko, Umar Chado; Mamuda, Mamman
2015-05-01
Lassa virus that causes the Lassa fever is on the list of potential bio-weapons agents. It was recently imported into Germany, the Netherlands, the United Kingdom and the United States as a consequence of the rapid growth of international traffic. A model with five mutually exclusive compartments related to Lassa fever is presented and the basic reproduction number analyzed. A sensitivity analysis of the deterministic model is performed in order to determine the relative importance of the model parameters to the disease transmission. The result of the sensitivity analysis shows that the most sensitive parameter is human immigration, followed by the human recovery rate, then person-to-person contact. This suggests that control strategies should target human immigration, effective drugs for treatment, and education to reduce person-to-person contact.
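The normalized forward sensitivity index used in analyses like this one is straightforward to compute numerically. The sketch below applies it to a deliberately minimal SIR-type reproduction number; the compartmental structure, parameter names, and values are illustrative assumptions of ours, not the paper's five-compartment Lassa model:

```python
def r0(beta, gamma, mu):
    # Basic reproduction number of a minimal SIR-type model (illustrative
    # stand-in; NOT the paper's five-compartment Lassa fever model)
    return beta / (gamma + mu)

def sensitivity_index(f, params, name, h=1e-6):
    # Normalized forward sensitivity index: (dR0/dp) * (p / R0),
    # approximated here by a forward finite difference
    base = f(**params)
    bumped = dict(params)
    bumped[name] = params[name] * (1 + h)
    deriv = (f(**bumped) - base) / (params[name] * h)
    return deriv * params[name] / base

params = {"beta": 0.3, "gamma": 0.1, "mu": 0.02}
print(round(sensitivity_index(r0, params, "beta"), 3))   # 1.0: R0 scales linearly with beta
print(round(sensitivity_index(r0, params, "gamma"), 3))  # -0.833
```

Ranking parameters by the absolute value of this index is exactly how "most sensitive parameter" statements like the one above are typically made.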
Deterministic Mutation Rate Variation in the Human Genome
Smith, Nick G.C.; Webster, Matthew T.; Ellegren, Hans
2002-01-01
Several studies of substitution rate variation have indicated that the local mutation rate varies over the mammalian genome. In the present study, we show significant variation in substitution rates within the noncoding part of the human genome using 4.7 Mb of human-chimpanzee pairwise comparisons. Moreover, we find a significant positive covariation of lineage-specific chimpanzee and human local substitution rates, and very similar mean substitution rates down the two lineages. The substitution rate variation is probably not caused by selection or biased gene conversion, and so we conclude that mutation rates vary deterministically across the noncoding nonrepetitive regions of the human genome. We also show that noncoding substitution rates are significantly affected by G+C base composition, partly because the base composition is not at equilibrium. PMID:12213772
Robust Audio Watermarking Scheme Based on Deterministic Plus Stochastic Model
NASA Astrophysics Data System (ADS)
Dhar, Pranab Kumar; Kim, Cheol Hong; Kim, Jong-Myon
Digital watermarking has been widely used for protecting digital contents from unauthorized duplication. This paper proposes a new watermarking scheme based on spectral modeling synthesis (SMS) for copyright protection of digital contents. SMS defines a sound as a combination of deterministic events plus a stochastic component that makes it possible for a synthesized sound to attain all of the perceptual characteristics of the original sound. In our proposed scheme, watermarks are embedded into the most prominent peak of the magnitude spectrum of each non-overlapping frame in peak trajectories. Simulation results indicate that the proposed watermarking scheme is highly robust against various kinds of attacks such as noise addition, cropping, re-sampling, re-quantization, and MP3 compression and achieves similarity values ranging from 17 to 22. In addition, our proposed scheme achieves signal-to-noise ratio (SNR) values ranging from 29 dB to 30 dB.
Deterministic nonclassicality for quantum-mechanical oscillators in thermal states
NASA Astrophysics Data System (ADS)
Marek, Petr; Lachman, Lukáš; Slodička, Lukáš; Filip, Radim
2016-07-01
Quantum nonclassicality is the basic building block for the vast majority of quantum information applications, and methods of its generation are at the forefront of research. One of the obstacles any method needs to clear is the looming presence of decoherence and noise, which act against the nonclassicality and often erase it completely. In this paper we show that nonclassical states of a quantum harmonic oscillator initially in a thermal equilibrium state can be deterministically created by coupling it to a single two-level system. This can be achieved even in the absorption regime, in which the two-level system is initially in the ground state. The method is resilient to noise and may actually benefit from it, as witnessed by systems with higher thermal energy producing more nonclassical states.
Classification and unification of the microscopic deterministic traffic models.
Yang, Bo; Monterola, Christopher
2015-10-01
We identify a universal mathematical structure in microscopic deterministic traffic models (with identical drivers), and thus we show that all such existing models in the literature, including both the two-phase and three-phase models, can be understood as special cases of a master model by expansion around a set of well-defined ground states. This allows any two traffic models to be properly compared and identified. The three-phase models are characterized by the vanishing of leading orders of expansion within a certain density range, and as an example the popular intelligent driver model is shown to be equivalent to a generalized optimal velocity (OV) model. We also explore the diverse solutions of the generalized OV model that can be important both for understanding human driving behaviors and algorithms for autonomous driverless vehicles. PMID:26565284
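The generalized optimal velocity (OV) structure that such models expand around can be illustrated with a classic Bando-type car-following simulation. The OV function and parameter values below are common textbook choices, not the specific master-model expansion of the paper:

```python
import math

def optimal_velocity(headway, v_max=30.0, hc=20.0):
    # Bando-type OV function; v_max and hc are illustrative parameter choices
    return v_max * (math.tanh(headway - hc) + math.tanh(hc)) / (1.0 + math.tanh(hc))

def step(positions, velocities, a=1.0, dt=0.1, road=1000.0):
    # One Euler step of dv_i/dt = a * (V(headway_i) - v_i) on a ring road
    n = len(positions)
    new_x, new_v = [], []
    for i in range(n):
        headway = (positions[(i + 1) % n] - positions[i]) % road
        v = velocities[i] + a * (optimal_velocity(headway) - velocities[i]) * dt
        new_v.append(v)
        new_x.append((positions[i] + v * dt) % road)
    return new_x, new_v

# 40 identical drivers, uniformly spaced: the flow relaxes to the uniform
# steady state v = V(25), the kind of "ground state" an expansion is built around
x, v = [25.0 * i for i in range(40)], [0.0] * 40
for _ in range(500):
    x, v = step(x, v)
```

With identical drivers and uniform spacing the headways never change, so each velocity relaxes exponentially to the optimal velocity for that headway.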
A Deterministic Computational Procedure for Space Environment Electron Transport
NASA Technical Reports Server (NTRS)
Nealy, John E.; Chang, C. K.; Norman, Ryan B.; Blattnig, Steve R.; Badavi, Francis F.; Adamcyk, Anne M.
2010-01-01
A deterministic computational procedure for describing the transport of electrons in condensed media is formulated to simulate the effects and exposures from spectral distributions typical of electrons trapped in planetary magnetic fields. The primary purpose for developing the procedure is to provide a means of rapidly performing numerous repetitive transport calculations essential for electron radiation exposure assessments for complex space structures. The present code utilizes well-established theoretical representations to describe the relevant interactions and transport processes. A combined mean free path and average trajectory approach is used in the transport formalism. For typical space environment spectra, several favorable comparisons with Monte Carlo calculations indicate that the gain in computational speed does not come at the expense of accuracy.
Reinforcement learning output feedback NN control using deterministic learning technique.
Xu, Bin; Yang, Chenguang; Shi, Zhongke
2014-03-01
In this brief, a novel adaptive-critic-based neural network (NN) controller is investigated for nonlinear pure-feedback systems. The controller design is based on the transformed predictor form, and the actor-critic NN control architecture includes two NNs: the critic NN approximates the strategic utility function, and the action NN minimizes both the strategic utility function and the tracking error. A deterministic learning technique is employed to guarantee that the partial persistent excitation condition of internal states is satisfied during tracking control to a periodic reference orbit. The uniformly ultimate boundedness of closed-loop signals is shown via Lyapunov stability analysis. Simulation results are presented to demonstrate the effectiveness of the proposed control. PMID:24807456
Scaling mobility patterns and collective movements: Deterministic walks in lattices
NASA Astrophysics Data System (ADS)
Han, Xiao-Pu; Zhou, Tao; Wang, Bing-Hong
2011-05-01
Scaling mobility patterns have been widely observed for animals. In this paper, we propose a deterministic walk model to understand the scaling mobility patterns, where walkers take the least-action walks on a lattice landscape and prey. Scaling laws in the displacement distribution emerge when the amount of prey resource approaches the critical point. Around the critical point, our model generates ordered collective movements of walkers with a quasiperiodic synchronization of walkers’ directions. These results indicate that the coevolution of walkers’ least-action behavior and the landscape could be a potential origin of not only the individual scaling mobility patterns but also the flocks of animals. Our findings provide a bridge to connect the individual scaling mobility patterns and the ordered collective movements.
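A deterministic, "least-action" foraging rule of the kind described above can be sketched in a few lines. The nearest-prey rule and lattice setup below are our illustrative stand-ins, not the paper's model or its critical-point analysis:

```python
def nearest_prey_walk(start, prey):
    # Minimal deterministic walker (illustrative, not the paper's model):
    # from its current site the walker always moves to the nearest remaining
    # prey site on a 2-D lattice, measured in Manhattan distance.
    x, y = start
    remaining = set(prey)
    displacements = []
    while remaining:
        # tie-break on the site tuple so the walk is fully deterministic
        target = min(remaining, key=lambda p: (abs(p[0] - x) + abs(p[1] - y), p))
        displacements.append(abs(target[0] - x) + abs(target[1] - y))
        x, y = target
        remaining.remove(target)
    return displacements

print(nearest_prey_walk((0, 0), [(0, 5), (1, 1), (10, 10)]))  # [2, 5, 15]
```

The displacement sequence is the quantity whose distribution develops scaling laws in the paper when prey resources approach the critical point.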
Deterministic processes vary during community assembly for ecologically dissimilar taxa
Powell, Jeff R.; Karunaratne, Senani; Campbell, Colin D.; Yao, Huaiying; Robinson, Lucinda; Singh, Brajesh K.
2015-01-01
The continuum hypothesis states that both deterministic and stochastic processes contribute to the assembly of ecological communities. However, the contextual dependency of these processes remains an open question that imposes strong limitations on predictions of community responses to environmental change. Here we measure community and habitat turnover across multiple vertical soil horizons at 183 sites across Scotland for bacteria and fungi, both dominant and functionally vital components of all soils but which differ substantially in their growth habit and dispersal capability. We find that habitat turnover is the primary driver of bacterial community turnover in general, although its importance decreases with increasing isolation and disturbance. Fungal communities, however, exhibit a highly stochastic assembly process, both neutral and non-neutral in nature, largely independent of disturbance. These findings suggest that an increased focus on dispersal limitation and biotic interactions is necessary to manage and conserve the key ecosystem services provided by these assemblages. PMID:26436640
Deterministic Squeezed States with Collective Measurements and Feedback.
Cox, Kevin C; Greve, Graham P; Weiner, Joshua M; Thompson, James K
2016-03-01
We demonstrate the creation of entangled, spin-squeezed states using a collective, or joint, measurement and real-time feedback. The pseudospin state of an ensemble of N=5×10^{4} laser-cooled ^{87}Rb atoms is deterministically driven to a specified population state with angular resolution that is a factor of 5.5(8) [7.4(6) dB] in variance below the standard quantum limit for unentangled atoms, comparable to the best enhancements using only unitary evolution. Without feedback, conditioning on the outcome of the joint premeasurement, we directly observe up to 59(8) times [17.7(6) dB] improvement in quantum phase variance relative to the standard quantum limit for N=4×10^{5} atoms. This is one of the largest reported entanglement enhancements to date in any system. PMID:26991175
Deterministic simulation of thermal neutron radiography and tomography
NASA Astrophysics Data System (ADS)
Pal Chowdhury, Rajarshi; Liu, Xin
2016-05-01
In recent years, thermal neutron radiography and tomography have gained much attention as nondestructive testing methods. However, their application is hindered by technical complexity, radiation shielding requirements, and time-consuming data collection. Monte Carlo simulations have been developed in the past to improve the capabilities of neutron imaging facilities. In this paper, a new deterministic simulation approach is proposed and demonstrated that simulates neutron radiographs numerically using a ray tracing algorithm. This approach makes the simulation of neutron radiographs much faster than the previously used stochastic methods (i.e., Monte Carlo methods). The major difficulty in simulating neutron radiography and tomography is finding a suitable scatter model. In this paper, an analytic scatter model is proposed and validated against a Monte Carlo simulation.
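The core of a ray-tracing radiograph simulation is a Beer-Lambert line integral for the uncollided beam. The sketch below omits any scatter model (which the paper supplies analytically) and uses made-up attenuation values:

```python
import math

def transmission(segments):
    # Uncollided-beam transmission I/I0 = exp(-sum(mu_i * l_i)) along one ray;
    # segments is a list of (attenuation coefficient per cm, path length in cm)
    optical_depth = sum(mu * length for mu, length in segments)
    return math.exp(-optical_depth)

# A two-pixel "radiograph" of a step object (illustrative values):
ray_a = [(0.1, 2.0)]               # 2 cm of weak absorber
ray_b = [(0.1, 2.0), (1.5, 1.0)]   # same, plus 1 cm of strong absorber
print(transmission(ray_a))  # exp(-0.2)
print(transmission(ray_b))  # exp(-1.7)
```

Tracing one such line integral per detector pixel yields a radiograph deterministically, which is why this approach is so much faster than sampling individual neutron histories.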
A Deterministic Approximation Algorithm for Maximum 2-Path Packing
NASA Astrophysics Data System (ADS)
Tanahashi, Ruka; Chen, Zhi-Zhong
This paper deals with the maximum-weight 2-path packing problem (M2PP), which is the problem of computing a set of vertex-disjoint paths of length 2 in a given edge-weighted complete graph so that the total weight of edges in the paths is maximized. Previously, Hassin and Rubinstein gave a randomized cubic-time approximation algorithm for M2PP which achieves an expected ratio of 35/67 - ε ≈ 0.5223 - ε for any constant ε > 0. We refine their algorithm and derandomize it to obtain a deterministic cubic-time approximation algorithm for the problem which achieves a better ratio (namely, 0.5265 - ε for any constant ε > 0).
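To make the problem concrete, here is a simple greedy baseline for M2PP. This is NOT the paper's derandomized (0.5265 - ε)-approximation, just an illustrative heuristic that shows what a vertex-disjoint packing of 2-edge paths looks like:

```python
from itertools import combinations

def greedy_2path_packing(n, w):
    # w(u, v): symmetric edge weight on the complete graph over vertices 0..n-1.
    # Greedy heuristic: take the heaviest edge among unused vertices, extend it
    # by the heaviest incident edge to a third unused vertex, and repeat.
    unused = set(range(n))
    packing = []
    while len(unused) >= 3:
        u, v = max(combinations(sorted(unused), 2), key=lambda e: w(*e))
        rest = unused - {u, v}
        x = max(sorted(rest), key=lambda c: max(w(u, c), w(v, c)))
        # orient the 3-vertex path so the heavier extension edge is used
        path = (v, u, x) if w(u, x) >= w(v, x) else (u, v, x)
        packing.append(path)
        unused -= {u, v, x}
    return packing

print(greedy_2path_packing(6, lambda u, v: u + v))  # [(4, 5, 3), (1, 2, 0)]
```

Each returned triple (a, b, c) is a path with edges (a, b) and (b, c); the paths are vertex-disjoint, as the problem requires.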
Location deterministic biosensing from quantum-dot-nanowire assemblies.
Liu, Chao; Kim, Kwanoh; Fan, D L
2014-08-25
Semiconductor quantum dots (QDs) with high fluorescent brightness, stability, and tunable sizes, have received considerable interest for imaging, sensing, and delivery of biomolecules. In this research, we demonstrate location deterministic biochemical detection from arrays of QD-nanowire hybrid assemblies. QDs with diameters less than 10 nm are manipulated and precisely positioned on the tips of the assembled Gold (Au) nanowires. The manipulation mechanisms are quantitatively understood as the synergetic effects of dielectrophoretic (DEP) and alternating current electroosmosis (ACEO) due to AC electric fields. The QD-nanowire hybrid sensors operate uniquely by concentrating bioanalytes to QDs on the tips of nanowires before detection, offering much enhanced efficiency and sensitivity, in addition to the position-predictable rationality. This research could result in advances in QD-based biomedical detection and inspires an innovative approach for fabricating various QD-based nanodevices. PMID:25316926
Capillary-mediated interface perturbations: Deterministic pattern formation
NASA Astrophysics Data System (ADS)
Glicksman, Martin E.
2016-09-01
Leibniz-Reynolds analysis identifies a 4th-order capillary-mediated energy field that is responsible for shape changes observed during melting, and for interface speed perturbations during crystal growth. Field-theoretic principles also show that capillary-mediated energy distributions cancel over large length scales, but modulate the interface shape on smaller mesoscopic scales. Speed perturbations reverse direction at specific locations where they initiate inflection and branching on unstable interfaces, thereby enhancing pattern complexity. Simulations of pattern formation by several independent groups of investigators using a variety of numerical techniques confirm that shape changes during both melting and growth initiate at locations predicted from interface field theory. Finally, limit cycles occur as an interface and its capillary energy field co-evolve, leading to synchronized branching. Synchronous perturbations produce classical dendritic structures, whereas asynchronous perturbations observed in isotropic and weakly anisotropic systems lead to chaotic-looking patterns that remain nevertheless deterministic.
Validation of a Deterministic Vibroacoustic Response Prediction Model
NASA Technical Reports Server (NTRS)
Caimi, Raoul E.; Margasahayam, Ravi
1997-01-01
This report documents the recently completed effort involving validation of a deterministic theory for the random vibration problem of predicting the response of launch pad structures in the low-frequency range (0 to 50 hertz). Use of the Statistical Energy Analysis (SEA) methods is not suitable in this range. Measurements of launch-induced acoustic loads and subsequent structural response were made on a cantilever beam structure placed in close proximity (200 feet) to the launch pad. Innovative ways of characterizing random, nonstationary, non-Gaussian acoustics are used for the development of a structure's excitation model. Extremely good correlation was obtained between analytically computed responses and those measured on the cantilever beam. Additional tests are recommended to bound the problem to account for variations in launch trajectory and inclination.
Deterministic secure communications using two-mode squeezed states
Marino, Alberto M.; Stroud, C. R. Jr.
2006-08-15
We propose a scheme for quantum cryptography that uses the squeezing phase of a two-mode squeezed state to transmit information securely between two parties. The basic principle behind this scheme is the fact that each mode of the squeezed field by itself does not contain any information regarding the squeezing phase. The squeezing phase can only be obtained through a joint measurement of the two modes. This, combined with the fact that it is possible to perform remote squeezing measurements, makes it possible to implement a secure quantum communication scheme in which a deterministic signal can be transmitted directly between two parties while the encryption is done automatically by the quantum correlations present in the two-mode squeezed state.
Conservative deterministic spectral Boltzmann solver near the grazing collisions limit
NASA Astrophysics Data System (ADS)
Haack, Jeffrey R.; Gamba, Irene M.
2012-11-01
We present new results building on the conservative deterministic spectral method for the space homogeneous Boltzmann equation developed by Gamba and Tharkabhushaman. This approach is a two-step process that acts on the weak form of the Boltzmann equation, and uses the machinery of the Fourier transform to reformulate the collisional integral into a weighted convolution in Fourier space. A constrained optimization problem is solved to preserve the mass, momentum, and energy of the resulting distribution. Within this framework we have extended the formulation to the more general case of collision operators with anisotropic scattering mechanisms, which requires a new formulation of the convolution weights. We also derive the grazing collisions limit for the method, and show that it is consistent with the Fokker-Planck-Landau equations as the grazing collisions parameter goes to zero.
Simple deterministically constructed cycle reservoirs with regular jumps.
Rodan, Ali; Tiňo, Peter
2012-07-01
A new class of state-space models, reservoir models, with a fixed state transition structure (the "reservoir") and an adaptable readout from the state space, has recently emerged as a way for time series processing and modeling. Echo state network (ESN) is one of the simplest, yet powerful, reservoir models. ESN models are generally constructed in a randomized manner. In our previous study (Rodan & Tiňo, 2011), we showed that a very simple, cyclic, deterministically generated reservoir can yield performance competitive with standard ESN. In this contribution, we extend our previous study in three aspects. First, we introduce a novel simple deterministic reservoir model, cycle reservoir with jumps (CRJ), with highly constrained weight values, that has superior performance to standard ESN on a variety of temporal tasks of different origin and characteristics. Second, we elaborate on the possible link between reservoir characterizations, such as eigenvalue distribution of the reservoir matrix or pseudo-Lyapunov exponent of the input-driven reservoir dynamics, and the model performance. It has been suggested that a uniform coverage of the unit disk by such eigenvalues can lead to superior model performance. We show that despite highly constrained eigenvalue distribution, CRJ consistently outperforms ESN (which has much more uniform eigenvalue coverage of the unit disk). Also, unlike in the case of ESN, pseudo-Lyapunov exponents of the selected optimal CRJ models are consistently negative. Third, we present a new framework for determining the short-term memory capacity of linear reservoir models to a high degree of precision. Using the framework, we study the effect of shortcut connections in the CRJ reservoir topology on its memory capacity. PMID:22428595
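The CRJ topology is simple enough to construct in a few lines. The sketch below follows the structure described in the abstract (a unidirectional ring plus bidirectional jumps); the function signature, parameter names, and values are our assumptions rather than the paper's notation:

```python
import numpy as np

def crj_reservoir(n, rc, rj, jump):
    # Cycle reservoir with jumps (CRJ): every reservoir unit feeds the next
    # around a ring with a single weight rc, and every `jump`-th unit is
    # linked to the unit `jump` places ahead by a bidirectional edge rj.
    W = np.zeros((n, n))
    for i in range(n):
        W[(i + 1) % n, i] = rc          # deterministic unidirectional cycle
    for i in range(0, n, jump):
        W[i, (i + jump) % n] = rj       # regular bidirectional jumps
        W[(i + jump) % n, i] = rj
    return W

W = crj_reservoir(12, rc=0.7, rj=0.3, jump=3)
```

Only two weight values (plus input scaling) are tuned, in contrast to a standard ESN's fully random reservoir matrix; that highly constrained structure is the point of the paper.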
Deterministic and Stochastic Receiver Clock Modeling in Precise Point Positioning
NASA Astrophysics Data System (ADS)
Orliac, E.; Dach, R.; Wang, K.; Rothacher, M.; Voithenleitner, D.; Hugentobler, U.; Heinze, M.; Svehla, D.
2012-04-01
The traditional GNSS (Global Navigation Satellite System) data analysis assumes an independent set of clock corrections for each epoch. This introduces a huge number of parameters that are highly correlated with station height and troposphere parameters. If the number of clock parameters can be reduced, the GNSS processing procedure may be stabilized. Experiments with kinematic solutions for stations equipped with H-Maser clocks have confirmed this. On the other hand, static coordinates do not significantly benefit from changing the strategy in handling the clock parameter. In the current GNSS constellation only GIOVE-B and the GPS Block IIF satellite clocks seem to be good enough to be modeled instead of freely estimated for each epoch without losing accuracy at the level of phase measurements. With the Galileo constellation this will change in the future. In this context, ESA (European Space Agency) funded a project on "Satellite and Station Clock Modelling for GNSS". In the frame of this project, various deterministic and stochastic clock models have been evaluated, implemented and assessed for both station and satellite clocks. In this paper we focus on the impact of modeling the receiver clock in the processing of GNSS data in static and kinematic precise point positioning (PPP) modes. Initial results show that for stations connected to an H-Maser clock the stability of the vertical position for kinematic PPP could be improved by up to 60%. The impact of clock modeling on the estimation of troposphere parameters is also investigated, along with the role of the tropospheric modeling itself, by testing various sampling rates and relative constraints for the troposphere parameters. Finally, we investigate the convergence time of PPP when deterministic or stochastic clock modeling is applied to the receiver clock.
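The parameter-reduction idea, replacing one clock parameter per epoch with a low-order deterministic model, can be sketched with a quadratic polynomial fit (offset, drift, aging). The numbers below are synthetic stand-ins for filter-derived epochwise clock estimates, not real GNSS data:

```python
import numpy as np

# Synthetic epochwise receiver clock estimates (seconds): a smooth
# H-maser-like quadratic plus a tiny unmodeled wiggle.
t = np.arange(0.0, 300.0, 30.0)                 # 10 epochs, 30 s spacing
truth = 1e-6 + 1e-9 * t + 0.5 * 1e-13 * t**2    # offset + drift*t + 0.5*aging*t^2
epochwise = truth + 1e-12 * np.sin(t)

# Deterministic clock model: 3 polynomial coefficients replace 10
# independent per-epoch clock parameters.
coeffs = np.polyfit(t, epochwise, 2)
model = np.polyval(coeffs, t)
rms = np.sqrt(np.mean((epochwise - model) ** 2))
```

If the post-fit RMS stays at the phase-noise level, the per-epoch clock parameters can safely be replaced by the model, which is the stabilization effect the abstract describes.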
Testing for deterministic trends in global sea surface temperature
NASA Astrophysics Data System (ADS)
Barbosa, Susana
2010-05-01
The identification and estimation of trends is a frequent and fundamental task in the analysis of hydrometeorological records. The task is challenging because even time series generated by purely random processes can exhibit visually appealing trends that can be misleadingly taken as evidence of non-stationary behavior. Hydrometeorological time series exhibiting long range dependence can also exhibit trend-like features that can be mistakenly interpreted as a trend, leading to erroneous forecasts and interpretations of the variability structure of the series, particularly in terms of statistical uncertainty. In practice the overwhelming majority of trends in hydro-climatic records are reported as the slope from a linear regression model. It is therefore important to assess when a linear regression model is a reasonable description for a time series. One could think that if a derived slope is statistically significant, particularly if inference is performed carefully, the linear regression model would be appropriate. However, stochastic features, such as long-range dependence can produce statistically significant linear trends. Therefore, the plausibility of the linear regression model needs to be tested itself, in addition to testing if the trend slope is statistically significant. In this work parametric statistical tests are applied in order to evaluate the trend-stationary assumption in global sea surface temperature for the period from January 1900 to December 2008. The fit of a linear deterministic model to the spatially-averaged global mean SST series yields a statistically significant positive slope, suggesting an increasing linear trend. However, statistical testing rejects the hypothesis of a deterministic linear trend with a stationary stochastic noise. This is supported by the form of the temporal structure of the detrended series, which exhibits large positive values up to lags of 5 years, indicating temporal persistence.
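The abstract's caution, that stochastic persistence can masquerade as a deterministic trend, is easy to reproduce: fit the same linear model to white noise plus a genuine trend and to a trend-free random walk, then inspect the residual persistence. This synthetic illustration is an assumption of ours, not the paper's SST analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_trend(t, y):
    # OLS fit of y = a + b*t; returns the slope and the lag-1
    # autocorrelation of the residuals (a simple persistence diagnostic)
    b, a = np.polyfit(t, y, 1)
    resid = y - (a + b * t)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    return b, r1

t = np.arange(200.0)
trend_plus_noise = 0.05 * t + rng.standard_normal(200)  # genuine linear trend
random_walk = np.cumsum(rng.standard_normal(200))       # no deterministic trend

b1, r1_white = fit_trend(t, trend_plus_noise)
b2, r1_red = fit_trend(t, random_walk)
# Both slopes can look "significant", but only the random walk leaves
# strongly persistent residuals -- the red flag the abstract describes.
```

This is why the trend-stationarity assumption itself must be tested, not just the significance of the slope.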
Deterministic Diffusion Fiber Tracking Improved by Quantitative Anisotropy
Yeh, Fang-Cheng; Verstynen, Timothy D.; Wang, Yibao; Fernández-Miranda, Juan C.; Tseng, Wen-Yih Isaac
2013-01-01
Diffusion MRI tractography has emerged as a useful and popular tool for mapping connections between brain regions. In this study, we examined the performance of quantitative anisotropy (QA) in facilitating deterministic fiber tracking. Two phantom studies were conducted. The first phantom study examined the susceptibility of fractional anisotropy (FA), generalized fractional anisotropy (GFA), and QA to various partial volume effects. The second phantom study examined the spatial resolution of the FA-aided, GFA-aided, and QA-aided tractographies. An in vivo study was conducted to track the arcuate fasciculus, and two neurosurgeons blind to the acquisition and analysis settings were invited to identify false tracks. The performance of QA in assisting fiber tracking was compared with FA, GFA, and anatomical information from T1-weighted images. Our first phantom study showed that QA is less sensitive to the partial volume effects of crossing fibers and free water, suggesting that it is a robust index. The second phantom study showed that the QA-aided tractography has better resolution than the FA-aided and GFA-aided tractography. Our in vivo study further showed that the QA-aided tractography outperforms the FA-aided, GFA-aided, and anatomy-aided tractographies. In the shell scheme (HARDI), the FA-aided, GFA-aided, and anatomy-aided tractographies have 30.7%, 32.6%, and 24.45% of the false tracks, respectively, while the QA-aided tractography has 16.2%. In the grid scheme (DSI), the FA-aided, GFA-aided, and anatomy-aided tractographies have 12.3%, 9.0%, and 10.93% of the false tracks, respectively, while the QA-aided tractography has 4.43%. The QA-aided deterministic fiber tracking may assist fiber tracking studies and facilitate the advancement of human connectomics. PMID:24348913
Standard fluctuation-dissipation process from a deterministic mapping
NASA Astrophysics Data System (ADS)
Bianucci, Marco; Mannella, Riccardo; Fan, Ximing; Grigolini, Paolo; West, Bruce J.
1993-03-01
We illustrate a derivation of a standard fluctuation-dissipation process from a discrete deterministic dynamical model. This model is a three-dimensional mapping, driving the motion of three variables, w, ξ, and π. We show that for suitable values of the parameters of this mapping, the motion of the variable w is indistinguishable from that of a stochastic variable described by a Fokker-Planck equation with well-defined friction γ and diffusion D. This result can be explained as follows. The bidimensional system of the two variables ξ and π is a nonlinear, deterministic, and chaotic system, with the key property of resulting in a finite correlation time for the variable ξ and in a linear response of ξ to an external perturbation. Both properties are traced back to the fully chaotic nature of this system. When this subsystem is coupled to the variable w, via a very weak coupling guaranteeing a large-time-scale separation between the two systems, the variable w is proven to be driven by a standard fluctuation-dissipation process. We call the subsystem a booster whose chaotic nature triggers the standard fluctuation-dissipation process exhibited by the variable w. The diffusion process is a trivial consequence of the central-limit theorem, whose validity is assured by the finite time scale of the correlation function of ξ. The dissipation affecting the variable w is traced back to the linear response of the booster, which is evaluated adopting a geometrical procedure based on the properties of chaos rather than the conventional perturbation approach.
Eye growth and myopia development: Unifying theory and Matlab model.
Hung, George K; Mahadas, Kausalendra; Mohammad, Faisal
2016-03-01
The aim of this article is to present an updated unifying theory of the mechanisms underlying eye growth and myopia development. A series of model simulation programs were developed to illustrate the mechanism of eye growth regulation and myopia development. Two fundamental processes are presumed to govern the relationship between physiological optics and eye growth: genetically pre-programmed signaling and blur feedback. Cornea/lens is considered to have only a genetically pre-programmed component, whereas eye growth is considered to have both a genetically pre-programmed and a blur feedback component. Moreover, based on the Incremental Retinal-Defocus Theory (IRDT), the rate of change of blur size provides the direction for blur-driven regulation. The various factors affecting eye growth are shown in 5 simulations: (1 - unregulated eye growth): blur feedback is rendered ineffective, as in the case of form deprivation, so there is only genetically pre-programmed eye growth, generally resulting in myopia; (2 - regulated eye growth): blur feedback regulation demonstrates the emmetropization process, with abnormally excessive or reduced eye growth leading to myopia and hyperopia, respectively; (3 - repeated near-far viewing): simulation of large-to-small change in blur size as seen in the accommodative stimulus/response function, and via IRDT as well as nearwork-induced transient myopia (NITM), leading to the development of myopia; (4 - neurochemical bulk flow and diffusion): release of dopamine from the inner plexiform layer of the retina, and the subsequent diffusion and relay of neurochemical cascade show that a decrease in dopamine results in a reduction of proteoglycan synthesis rate, which leads to myopia; (5 - Simulink model): model of genetically pre-programmed signaling and blur feedback components that allows for different input functions to simulate experimental manipulations that result in hyperopia, emmetropia, and myopia. These model simulation programs
MATLAB-based automated patch-clamp system for awake behaving mice.
Desai, Niraj S; Siegel, Jennifer J; Taylor, William; Chitwood, Raymond A; Johnston, Daniel
2015-08-01
Automation has been an important part of biomedical research for decades, and the use of automated and robotic systems is now standard for such tasks as DNA sequencing, microfluidics, and high-throughput screening. Recently, Kodandaramaiah and colleagues (Nat Methods 9: 585-587, 2012) demonstrated, using anesthetized animals, the feasibility of automating blind patch-clamp recordings in vivo. Blind patch is a good target for automation because it is a complex yet highly stereotyped process that revolves around analysis of a single signal (electrode impedance) and movement along a single axis. Here, we introduce an automated system for blind patch-clamp recordings from awake, head-fixed mice running on a wheel. In its design, we were guided by 3 requirements: easy-to-use and easy-to-modify software; seamless integration of behavioral equipment; and efficient use of time. The resulting system employs equipment that is standard for patch recording rigs, moderately priced, or simple to make. It is written entirely in MATLAB, a programming environment that has an enormous user base in the neuroscience community and many available resources for analysis and instrument control. Using this system, we obtained 19 whole cell patch recordings from neurons in the prefrontal cortex of awake mice, aged 8-9 wk. Successful recordings had series resistances that averaged 52 ± 4 MΩ and required 5.7 ± 0.6 attempts to obtain. These numbers are comparable with those of experienced electrophysiologists working manually, and this system, written in a simple and familiar language, will be useful to many cellular electrophysiologists who wish to study awake behaving mice. PMID:26084901
GUARDD: user-friendly MATLAB software for rigorous analysis of CPMG RD NMR data.
Kleckner, Ian R; Foster, Mark P
2012-01-01
Molecular dynamics are essential for life, and nuclear magnetic resonance (NMR) spectroscopy has been used extensively to characterize these phenomena since the 1950s. For the past 15 years, the Carr-Purcell-Meiboom-Gill relaxation dispersion (CPMG RD) NMR experiment has afforded advanced NMR labs access to kinetic, thermodynamic, and structural details of protein and RNA dynamics in the crucial μs-ms time window. However, analysis of RD data is challenging because datasets are often large and require many non-linear fitting parameters, thereby confounding assessment of accuracy. Moreover, novice CPMG experimentalists face an additional barrier because current software options lack an intuitive user interface and extensive documentation. Hence, we present the open-source software package GUARDD (Graphical User-friendly Analysis of Relaxation Dispersion Data), which is designed to organize, automate, and enhance the analytical procedures which operate on CPMG RD data ( http://code.google.com/p/guardd/). This MATLAB-based program includes a graphical user interface, permits global fitting to multi-field, multi-temperature, multi-coherence data, and implements χ²-mapping procedures, via grid-search and Monte Carlo methods, to enhance and assess fitting accuracy. The presentation features allow users to seamlessly traverse the large amount of results, and the RD Simulator feature can help design future experiments as well as serve as a teaching tool for those unfamiliar with RD phenomena. Based on these innovative features, we expect that GUARDD will fill a well-defined gap in service of the RD NMR community. PMID:22160811
nSTAT: open-source neural spike train analysis toolbox for Matlab.
Cajigas, I; Malik, W Q; Brown, E N
2012-11-15
Over the last decade there has been a tremendous advance in the analytical tools available to neuroscientists to understand and model neural function. In particular, the point process-generalized linear model (PP-GLM) framework has been applied successfully to problems ranging from neuro-endocrine physiology to neural decoding. However, the lack of freely distributed software implementations of published PP-GLM algorithms, together with the problem-specific modifications required for their use, limit wide application of these techniques. In an effort to make existing PP-GLM methods more accessible to the neuroscience community, we have developed nSTAT, an open-source neural spike train analysis toolbox for Matlab®. By adopting an object-oriented programming (OOP) approach, nSTAT allows users to easily manipulate data by performing operations on objects that have an intuitive connection to the experiment (spike trains, covariates, etc.), rather than by dealing with data in vector/matrix form. The algorithms implemented within nSTAT address a number of common problems including computation of peri-stimulus time histograms, quantification of the temporal response properties of neurons, and characterization of neural plasticity within and across trials. nSTAT provides a starting point for exploratory data analysis, allows for simple and systematic building and testing of point process models, and for decoding of stimulus variables based on point process models of neural function. By providing an open-source toolbox, we hope to establish a platform that can be easily used, modified, and extended by the scientific community to address limitations of current techniques and to extend available techniques to more complex problems. PMID:22981419
Integration of MATLAB Simulink® Models with the Vertical Motion Simulator
NASA Technical Reports Server (NTRS)
Lewis, Emily K.; Vuong, Nghia D.
2012-01-01
This paper describes the integration of MATLAB Simulink® models into the Vertical Motion Simulator (VMS) at NASA Ames Research Center. The VMS is a high-fidelity, large motion flight simulator that is capable of simulating a variety of aerospace vehicles. Integrating MATLAB Simulink models into the VMS needed to retain the development flexibility of the MATLAB environment and allow rapid deployment of model changes. The process developed at the VMS was used successfully in a number of recent simulation experiments. This accomplishment demonstrated that the model integrity was preserved, while working within the hard real-time run environment of the VMS architecture, and maintaining the unique flexibility of the VMS to meet diverse research requirements.
Linking Advanced Visualization and MATLAB for the Analysis of 3D Gene Expression Data
Ruebel, Oliver; Keranen, Soile V.E.; Biggin, Mark; Knowles, David W.; Weber, Gunther H.; Hagen, Hans; Hamann, Bernd; Bethel, E. Wes
2011-03-30
Three-dimensional gene expression PointCloud data generated by the Berkeley Drosophila Transcription Network Project (BDTNP) provides quantitative information about the spatial and temporal expression of genes in early Drosophila embryos at cellular resolution. The BDTNP team visualizes and analyzes PointCloud data using the software application PointCloudXplore (PCX). To maximize the impact of novel, complex data sets, such as PointClouds, the data needs to be accessible to biologists and comprehensible to developers of analysis functions. We address this challenge by linking PCX and Matlab via a dedicated interface, thereby providing biologists seamless access to advanced data analysis functions and giving bioinformatics researchers the opportunity to integrate their analysis directly into the visualization application. To demonstrate the usefulness of this approach, we computationally model parts of the expression pattern of the gene even skipped using a genetic algorithm implemented in Matlab and integrated into PCX via our Matlab interface.
ERIC Educational Resources Information Center
Community College Journal, 1996
1996-01-01
Includes a collection of eight short articles describing model community college programs. Discusses a literacy program, a mobile computer classroom, a support program for at-risk students, a timber-harvesting program, a multimedia presentation on successful women graduates, a career center, a collaboration with NASA, and an Israeli engineering…
A platform for dynamic simulation and control of movement based on OpenSim and MATLAB.
Mansouri, Misagh; Reinbolt, Jeffrey A
2012-05-11
Numerical simulations play an important role in solving complex engineering problems and have the potential to revolutionize medical decision making and treatment strategies. In this paper, we combine the rapid model-based design, control systems and powerful numerical method strengths of MATLAB/Simulink with the simulation and human movement dynamics strengths of OpenSim by developing a new interface between the two software tools. OpenSim is integrated with Simulink using the MATLAB S-function mechanism, and the interface is demonstrated using both open-loop and closed-loop control systems. While the open-loop system uses MATLAB/Simulink to separately reproduce the OpenSim Forward Dynamics Tool, the closed-loop system adds the unique feature of feedback control to OpenSim, which is necessary for most human movement simulations. An arm model example was successfully used in both open-loop and closed-loop cases. For the open-loop case, the simulation reproduced results from the OpenSim Forward Dynamics Tool with root mean square (RMS) differences of 0.03° for the shoulder elevation angle and 0.06° for the elbow flexion angle. MATLAB's variable step-size integrator reduced the time required to generate the forward dynamic simulation from 7.1s (OpenSim) to 2.9s (MATLAB). For the closed-loop case, a proportional-integral-derivative controller was used to successfully balance a pole on the model's hand despite random force disturbances on the pole. The new interface presented here not only integrates the OpenSim and MATLAB/Simulink software tools, but also will allow neuroscientists, physiologists, biomechanists, and physical therapists to adapt and generate new solutions as treatments for musculoskeletal conditions. PMID:22464351
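The closed-loop case can be illustrated with a minimal discrete PID loop in the same spirit as the paper's Simulink controller. The first-order plant and the gains below are illustrative stand-ins, not the OpenSim arm/pole model from the study.

```python
# Sketch of a discrete proportional-integral-derivative (PID) feedback
# loop. The plant x' = -x + u and the gains are hypothetical choices
# used only to show the closed-loop structure.

def pid_step(err, state, kp, ki, kd, dt):
    """One PID update; state carries (integral, previous error)."""
    integral, prev_err = state
    integral += err * dt
    deriv = (err - prev_err) / dt
    u = kp * err + ki * integral + kd * deriv
    return u, (integral, err)

def simulate(setpoint=1.0, steps=2000, dt=0.01):
    """Drive a first-order plant x' = -x + u toward the setpoint."""
    x, state = 0.0, (0.0, 0.0)
    for _ in range(steps):
        u, state = pid_step(setpoint - x, state, kp=4.0, ki=2.0, kd=0.1, dt=dt)
        x += (-x + u) * dt       # forward-Euler plant update
    return x
```

The integral term drives the steady-state error to zero, which is the property that lets the paper's controller hold the pole balanced despite disturbances.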
Deterministic or Probabilistic - Robustness or Resilience: How to Respond to Climate Change?
NASA Astrophysics Data System (ADS)
Plag, H.; Earnest, D.; Jules-Plag, S.
2013-12-01
suggests an intriguing hypothesis: disaster risk reduction programs need to account for whether they also facilitate the public trust, cooperation, and communication needed to recover from a disaster. Our work in the Hampton Roads area, where the probability of hazardous flooding and inundation events exceeding the thresholds of the infrastructure is high, suggests that to facilitate the paradigm shift from the deterministic to a probabilistic approach, natural sciences have to focus on hazard probabilities, while engineering and social sciences have to work together to understand how interactions of the built and social environments impact robustness and resilience. The current science-policy relationship needs to be augmented by social structures that can learn from previous unexpected events. In this response to climate change, science does not have the primary goal to reduce uncertainties and prediction errors, but rather to develop processes that can utilize uncertainties and surprises to increase robustness, strengthen resilience, and reduce fragility of the social systems during times when infrastructure fails.
Deterministic and Stochastic Analysis of a Prey-Dependent Predator-Prey System
ERIC Educational Resources Information Center
Maiti, Alakes; Samanta, G. P.
2005-01-01
This paper reports on studies of the deterministic and stochastic behaviours of a predator-prey system with prey-dependent response function. The first part of the paper deals with the deterministic analysis of uniform boundedness, permanence, stability and bifurcation. In the second part the reproductive and mortality factors of the prey and…
Multi-Strain Deterministic Chaos in Dengue Epidemiology, A Challenge for Computational Mathematics
NASA Astrophysics Data System (ADS)
Aguiar, Maíra; Kooi, Bob W.; Stollenwerk, Nico
2009-09-01
Recently, we have analysed epidemiological models of competing strains of pathogens, in which transmission differs between first and secondary infections owing to the interaction of the strains with previously acquired immunity, a phenomenon described for dengue fever and known as antibody-dependent enhancement (ADE). These models show a rich variety of dynamics through bifurcations up to deterministic chaos. Including temporary cross-immunity even enlarges the parameter range of such chaotic attractors, and also gives rise to various coexisting attractors, which are difficult to identify by standard numerical bifurcation programs using continuation methods. A combination of techniques, including classical bifurcation plots and Lyapunov exponent spectra, has to be applied to gain further insight into such dynamical structures. In particular, Lyapunov spectra, which quantify the predictability horizon of the epidemiological system, are computationally very demanding. We show ways to speed up computations of such Lyapunov spectra by a factor of more than ten by parallelizing previously used sequential C programs. Such fast computations of Lyapunov spectra will be especially of use in future investigations of seasonally forced versions of the present models, as they are needed for data analysis.
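As a toy illustration of the Lyapunov computation the abstract refers to, the sketch below estimates the exponent of the one-dimensional logistic map x → 4x(1−x) as the orbit average of log|f′(x)|. The multi-strain dengue model requires the same kind of average along trajectories of a much larger Jacobian, which is why full spectra are so costly; the map and iteration counts here are our stand-ins, not the authors' model.

```python
# Sketch: largest Lyapunov exponent of the logistic map x -> 4x(1-x),
# estimated as the orbit average of log|f'(x)| with f'(x) = 4 - 8x.
# For this fully chaotic map the exponent converges to ln 2.
import math

def logistic_lyapunov(x0=0.3, transient=500, n=20000):
    """Estimate the Lyapunov exponent of x -> 4x(1-x)."""
    x = x0
    for _ in range(transient):        # discard transient behaviour
        x = 4.0 * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(4.0 - 8.0 * x))   # log|f'(x)| along the orbit
        x = 4.0 * x * (1.0 - x)
    return acc / n
```

A positive exponent quantifies the finite predictability horizon mentioned in the abstract; for coupled ODE models the scalar derivative is replaced by a product of Jacobians, typically with QR re-orthonormalization.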
Boudet, Samuel; Peyrodie, Laurent; Gallois, Philippe; de l'Aulnoit, Denis Houzé; Cao, Hua; Forzy, Gérard
2013-01-01
This paper presents a Matlab-based software (MathWorks inc.) called BioSigPlot for the visualization of multi-channel biomedical signals, particularly for the EEG. This tool is designed for researchers on both engineering and medicine who have to collaborate to visualize and analyze signals. It aims to provide a highly customizable interface for signal processing experimentation in order to plot several kinds of signals while integrating the common tools for physician. The main advantages compared to other existing programs are the multi-dataset displaying, the synchronization with video and the online processing. On top of that, this program uses object oriented programming, so that the interface can be controlled by both graphic controls and command lines. It can be used as EEGlab plug-in but, since it is not limited to EEG, it would be distributed separately. BioSigPlot is distributed free of charge (http://biosigplot.sourceforge.net), under the terms of GNU Public License for non-commercial use and open source development. PMID:24110098
Generalized Simulation Model for a Switched-Mode Power Supply Design Course Using MATLAB/SIMULINK
ERIC Educational Resources Information Center
Liao, Wei-Hsin; Wang, Shun-Chung; Liu, Yi-Hua
2012-01-01
Switched-mode power supplies (SMPS) are becoming an essential part of many electronic systems as the industry drives toward miniaturization and energy efficiency. However, practical SMPS design courses are seldom offered. In this paper, a generalized MATLAB/SIMULINK modeling technique is first presented. A proposed practical SMPS design course at…
VizieR Online Data Catalog: Transiting planets search Matlab/Octave source code (Ofir+, 2014)
NASA Astrophysics Data System (ADS)
Ofir, A.
2014-01-01
The Matlab/Octave source code for Optimal BLS is made available here. Detailed descriptions of all inputs and outputs are given by comment lines in the file. Note: Octave does not currently support parallel for loops ("parfor"). Octave users therefore need to change the "parfor" command (line 217 of OptimalBLS.m) to "for". (7 data files).
Computation, Exploration, Visualisation: Reaction to MATLAB in First-Year Mathematics.
ERIC Educational Resources Information Center
Cretchley, Patricia; Harman, Chris; Ellerton, Nerida; Fogarty, Gerard
This paper describes a model for effective incorporation of technology into the learning experience of a large and diverse group of students in first-semester first-year tertiary mathematics. It describes the introduction of elementary use of MATLAB, in a course offered both on-campus and at a distance. The diversity of the student group is…
ERIC Educational Resources Information Center
Sharp, J. S.; Glover, P. M.; Moseley, W.
2007-01-01
In this paper we describe the recent changes to the curriculum of the second year practical laboratory course in the School of Physics and Astronomy at the University of Nottingham. In particular, we describe how Matlab has been implemented as a teaching tool and discuss both its pedagogical advantages and disadvantages in teaching undergraduate…
Computer-Aided Teaching Using MATLAB/Simulink for Enhancing an IM Course With Laboratory Tests
ERIC Educational Resources Information Center
Bentounsi, A.; Djeghloud, H.; Benalla, H.; Birem, T.; Amiar, H.
2011-01-01
This paper describes an automatic procedure using MATLAB software to plot the circle diagram for two induction motors (IMs), with wound and squirrel-cage rotors, from no-load and blocked-rotor tests. The advantage of this approach is that it avoids the need for a direct load test in predetermining the IM characteristics under reduced power.…
ORNL ADCP POST-PROCESSING GUIDE AND MATLAB ALGORITHMS FOR MHK SITE FLOW AND TURBULENCE ANALYSIS
Gunawan, Budi; Neary, Vincent S
2011-09-01
Standard methods and guidance for post-processing stationary ADCP measurements using MATLAB algorithms evaluated and tested by Oak Ridge National Laboratory (ORNL) are presented, following an overview of ADCP operating principles, deployment methods, error sources, and recommended protocols for removing and replacing spurious data.
Preliminary versions of the MATLAB tensor classes for fast algorithm prototyping.
Bader, Brett William; Kolda, Tamara Gibson
2004-07-01
We present the source code for three MATLAB classes for manipulating tensors in order to allow fast algorithm prototyping. A tensor is a multidimensional or N-way array. This is a supplementary report; details on using this code are provided separately in SAND-XXXX.
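To give a sense of what such tensor classes wrap, the sketch below implements mode-n matricization (unfolding) for a dense tensor stored as a flat row-major list. The column ordering follows the common Kolda-Bader convention; this pure-Python illustration is ours, not the released MATLAB code.

```python
# Sketch: mode-n unfolding of a dense N-way tensor stored as a flat,
# row-major list. Columns follow the Kolda-Bader convention: earlier
# non-unfolded modes vary fastest.
from itertools import product

def unfold(data, shape, mode):
    """Return the mode-`mode` unfolding as a list of rows."""
    n_cols = 1
    for k, s in enumerate(shape):
        if k != mode:
            n_cols *= s
    mat = [[0] * n_cols for _ in range(shape[mode])]
    # row-major strides: the last index varies fastest in `data`
    strides = [1] * len(shape)
    for k in range(len(shape) - 2, -1, -1):
        strides[k] = strides[k + 1] * shape[k + 1]
    for idx in product(*(range(s) for s in shape)):
        flat = sum(i * st for i, st in zip(idx, strides))
        col, mult = 0, 1
        for k in range(len(shape)):
            if k != mode:
                col += idx[k] * mult
                mult *= shape[k]
        mat[idx[mode]][col] = data[flat]
    return mat
```

Unfolding reduces tensor operations such as the mode-n product to ordinary matrix multiplication, which is the main service such classes provide.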
NASA Astrophysics Data System (ADS)
Sheldon, W.; Chamblee, J.; Cary, R. H.
2013-12-01
Environmental scientists are under increasing pressure from funding agencies and journal publishers to release quality-controlled data in a timely manner, as well as to produce comprehensive metadata for submitting data to long-term archives (e.g. DataONE, Dryad and BCO-DMO). At the same time, the volume of digital data that researchers collect and manage is increasing rapidly due to advances in high frequency electronic data collection from flux towers, instrumented moorings and sensor networks. However, few pre-built software tools are available to meet these data management needs, and those tools that do exist typically focus on part of the data management lifecycle or one class of data. The GCE Data Toolbox has proven to be both a generalized and effective software solution for environmental data management in the Long Term Ecological Research Network (LTER). This open source MATLAB software library, developed by the Georgia Coastal Ecosystems LTER program, integrates metadata capture, creation and management with data processing, quality control and analysis to support the entire data lifecycle. Raw data can be imported directly from common data logger formats (e.g. SeaBird, Campbell Scientific, YSI, Hobo), as well as delimited text files, MATLAB files and relational database queries. Basic metadata are derived from the data source itself (e.g. parsed from file headers) and by value inspection, and then augmented using editable metadata templates containing boilerplate documentation, attribute descriptors, code definitions and quality control rules. Data and metadata content, quality control rules and qualifier flags are then managed together in a robust data structure that supports database functionality and ensures data validity throughout processing. A growing suite of metadata-aware editing, quality control, analysis and synthesis tools are provided with the software to support managing data using graphical forms and command-line functions, as well as
Mesoscopic quantum emitters from deterministic aggregates of conjugated polymers
Stangl, Thomas; Wilhelm, Philipp; Remmerssen, Klaas; Höger, Sigurd; Vogelsang, Jan; Lupton, John M.
2015-01-01
An appealing definition of the term “molecule” arises from consideration of the nature of fluorescence, with discrete molecular entities emitting a stream of single photons. We address the question of how large a molecular object may become by growing deterministic aggregates from single conjugated polymer chains. Even particles containing dozens of individual chains still behave as single quantum emitters due to efficient excitation energy transfer, whereas the brightness is raised due to the increased absorption cross-section of the suprastructure. Excitation energy can delocalize between individual polymer chromophores in these aggregates by both coherent and incoherent coupling, which are differentiated by their distinct spectroscopic fingerprints. Coherent coupling is identified by a 10-fold increase in excited-state lifetime and a corresponding spectral red shift. Exciton quenching due to incoherent FRET becomes more significant as aggregate size increases, resulting in single-aggregate emission characterized by strong blinking. This mesoscale approach allows us to identify intermolecular interactions which do not exist in isolated chains and are inaccessible in bulk films where they are present but masked by disorder. PMID:26417079
Epileptic spike recognition in electroencephalogram using deterministic finite automata.
Keshri, Anup Kumar; Sinha, Rakesh Kumar; Hatwal, Rajesh; Das, Barda Nand
2009-06-01
This paper presents an automated method for epileptic spike detection in the electroencephalogram (EEG) using Deterministic Finite Automata (DFA). It takes a prerecorded single-channel EEG data file as input and finds the occurrences of epileptic spikes in it. The EEG signal was recorded at 256 Hz in separate two-minute data files using the Visual Lab-M software (ADLink Technology Inc., Taiwan). It was preprocessed to remove baseline shift and band-pass filtered using an infinite impulse response (IIR) Butterworth filter. A system whose functionality was modeled with a DFA was designed and tested with 10 EEG data files. The average recognition rate for epileptic spikes was 95.68%. The system requires no human intervention and no training. The results show that DFA can be useful for detecting different characteristics present in EEG signals, and the approach could be extended to a continuous data processing system. PMID:19408450
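A minimal sketch of the idea: quantize successive sample differences into symbols (steep rise, steep fall, flat) and run a DFA that accepts a rise-then-fall shape. The symbols, threshold, and transition table below are illustrative assumptions, not the automaton from the paper.

```python
# Hypothetical sketch of DFA-based spike recognition: a run of steep
# rises followed by steep falls is flagged as a spike. The symbol
# alphabet and the slope threshold are illustrative choices.

def quantize(signal, step=1.0):
    """Map successive sample differences to symbols:
    U (steep rise), D (steep fall), F (flat/slow change)."""
    symbols = []
    for prev, cur in zip(signal, signal[1:]):
        d = cur - prev
        symbols.append("U" if d > step else "D" if d < -step else "F")
    return symbols

# DFA: start --U--> rising --D--> falling; leaving "falling" means the
# fall has ended, so the spike pattern is recognized.
TRANSITIONS = {
    ("start", "U"): "rising", ("start", "D"): "start", ("start", "F"): "start",
    ("rising", "U"): "rising", ("rising", "D"): "falling", ("rising", "F"): "start",
    ("falling", "D"): "falling", ("falling", "U"): "spike", ("falling", "F"): "spike",
}

def detect_spikes(signal, step=1.0):
    """Return symbol-stream indices where a spike pattern ends."""
    state, hits = "start", []
    for i, sym in enumerate(quantize(signal, step)):
        state = TRANSITIONS[(state, sym)]
        if state == "spike":
            hits.append(i)
            state = "start"   # reset and keep scanning
    return hits
```

Because the automaton is deterministic, each sample triggers exactly one table lookup, which is what makes this style of detector fast enough for continuous data processing.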
Is there a sharp phase transition for deterministic cellular automata
Wootters, W. K. (Los Alamos National Lab., NM; Williams Coll., Williamstown, MA, Dept. of Physics); Langton, C. G.
1990-01-01
Previous work has suggested that there is a kind of phase transition between deterministic automata exhibiting periodic behavior and those exhibiting chaotic behavior. However, unlike the usual phase transitions of physics, this transition takes place over a range of values of the parameter rather than at a specific value. The present paper asks whether the transition can be made sharp, either by taking the limit of an infinitely large rule table, or by changing the parameter in terms of which the space of automata is explored. We find strong evidence that, for the class of automata we consider, the transition does become sharp in the limit of an infinite number of symbols, the size of the neighborhood being held fixed. Our work also suggests an alternative parameter in terms of which it is likely that the transition will become fairly sharp even if one does not increase the number of symbols. In the course of our analysis, we find that mean field theory, which is our main tool, gives surprisingly good predictions of the statistical properties of the class of automata we consider. 18 refs., 6 figs.
Deterministic point inclusion methods for computational applications with complex geometry
Khamayseh, Ahmed; Kuprat, Andrew P.
2008-11-21
A fundamental problem in computation is finding practical and efficient algorithms for determining if a query point is contained within a model of a three-dimensional solid. The solid is modeled using a general boundary representation that can contain polygonal elements and/or parametric patches. We have developed two such algorithms: the first is based on a global closest feature query, and the second is based on a local intersection query. Both algorithms work for two- and three-dimensional objects. This paper presents both algorithms, as well as the spatial data structures and queries required for efficient implementation of the algorithms. Applications for these algorithms include computational geometry, mesh generation, particle simulation, multiphysics coupling, and computer graphics. These methods are deterministic in that they do not involve random perturbations of diagnostic rays cast from the query point in order to avoid 'unclean' or 'singular' intersections of the rays with the geometry. Avoiding the necessity of such random perturbations will become increasingly important as geometries become more convoluted and complex.
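A two-dimensional analogue conveys how an intersection query can be made deterministic without random ray perturbation: a half-open edge rule counts a crossing only when an edge straddles the scanline through the query point, so a ray passing exactly through a vertex is never double-counted. This sketch is our illustration, not the authors' 3D implementation.

```python
# Sketch: deterministic even-odd point-in-polygon test. The half-open
# comparison (y1 <= qy) != (y2 <= qy) assigns each vertex to exactly one
# of its two incident edges, so no "unclean" vertex intersections occur
# and no random perturbation of the ray is needed.

def point_in_polygon(q, polygon):
    """True if q = (x, y) lies inside the polygon (list of (x, y))."""
    qx, qy = q
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Half-open rule: the edge crosses the scanline y = qy at most once
        if (y1 <= qy) != (y2 <= qy):
            # x-coordinate where the edge meets the scanline
            x_cross = x1 + (qy - y1) * (x2 - x1) / (y2 - y1)
            if qx < x_cross:
                inside = not inside
    return inside
```

The same parity idea carries over to 3D, where the authors' local intersection query must additionally handle parametric patches and rays grazing facet boundaries.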
SIR: Deterministic protein inference from peptides assigned to MS data.
Matthiesen, Rune; Prieto, Gorka; Amorim, Antonio; Aloria, Kerman; Fullaondo, Asier; Carvalho, Ana S; Arizmendi, Jesus M
2012-07-16
Currently the bottom-up approach is the most popular for characterizing protein samples by mass spectrometry. This is mainly attributed to the fact that the bottom-up approach has been successfully optimized for high throughput studies. However, the bottom-up approach is associated with a number of challenges such as loss of linkage information between peptides. Previous publications have addressed some of these problems which are commonly referred to as protein inference. Nevertheless, all previous publications on the subject are oversimplified and do not represent the full complexity of the proteins identified. To this end we present here SIR (spectra based isoform resolver) that uses a novel transparent and systematic approach for organizing and presenting identified proteins based on peptide spectra assignments. The algorithm groups peptides and proteins into five evidence groups and calculates sixteen parameters for each identified protein that are useful for cases where deterministic protein inference is the goal. The novel approach has been incorporated into SIR which is a user-friendly tool only concerned with protein inference based on imports of Mascot search results. SIR has in addition two visualization tools that facilitate further exploration of the protein inference problem. PMID:22626983
Deterministic separation of suspended particles in a reconfigurable obstacle array
NASA Astrophysics Data System (ADS)
Du, Siqi; Drazer, German
2015-11-01
We use a macromodel of a flow-driven deterministic lateral displacement microfluidic system to investigate conditions leading to size-separation of suspended particles. This model system can be easily reconfigured to establish an arbitrary forcing angle, i.e. the orientation between the average flow field and the square array of cylindrical posts that constitutes the stationary phase. We also consider posts of different diameters, while maintaining a constant gap between them, to investigate the effect of obstacle size on particle separation. In all cases, we observe the presence of a locked mode at small forcing angles, in which particles move along a principal direction in the lattice. A locked-to-zigzag mode transition takes place when the orientation of the driving force reaches a critical angle. We show that the transition occurs at increasing angles for larger particles, thus enabling particle separation. Moreover, we observe a linear regression between the critical angle and the size of the particles, which allows us to estimate size-resolution in these systems. The presence of such a linear relation would guide the selection of the forcing angle in microfluidic systems, in which the direction of the flow field with respect to the array of obstacles is fixed. Finally, we present a simple model based on the presence of irreversible interactions between the suspended particles and the obstacles, which describes the observed dependence of the migration angle on the orientation of the average flow.
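As an illustrative sketch only (the diameters and angles below are invented for the example, not the authors' measurements), the reported linear relation between critical forcing angle and particle size could be exploited with a least-squares fit to estimate size resolution:

```python
import numpy as np

# Hypothetical critical forcing angles for several particle diameters,
# assuming the linear trend reported in the abstract:
diameters = np.array([2.0, 3.0, 4.0, 5.0])          # mm (assumed)
critical_angles = np.array([6.1, 9.8, 14.2, 18.1])  # degrees (assumed)

slope, intercept = np.polyfit(diameters, critical_angles, 1)

# Size resolution: smallest size difference separable given an assumed
# angular resolution of 1 degree in setting the forcing angle.
angle_resolution = 1.0
size_resolution = angle_resolution / slope
print(f"slope = {slope:.2f} deg/mm, size resolution = {size_resolution:.2f} mm")
```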
Deterministic Control of two Fermions in a Double Well
NASA Astrophysics Data System (ADS)
Lompe, Thomas; Murmann, Simon; Bergschneider, Andrea; Klinkhamer, Vincent; Zuern, Gerhard; Jochim, Selim
2014-05-01
The behavior of an ensemble of fermionic particles confined in a periodic potential is one of the richest topics of condensed matter physics. The simplest and most widely used theoretical description of such systems is provided by the Fermi-Hubbard Hamiltonian. We realize this Hamiltonian by deterministically preparing systems of two fermionic atoms trapped in a double well potential in a quantum state of our choice. We have studied the tunneling dynamics of this system as a function of the interparticle interactions and found good agreement with theoretical expectations. We have thus obtained a single-site addressable realization of the Fermi-Hubbard model where all parameters can be fully controlled and freely tuned. As a first experiment we prepared systems of one | ↑ > and one | ↓ > atom in the ground state of the double well, introduced repulsive (attractive) interparticle interactions and observed the crossover into a Mott-insulating (charge-density-wave) regime by measuring the occupation statistics of the individual sites. By adding a third well to the system this approach could be used to directly observe ordered charge-density-waves and antiferromagnetic ordering. Now at Massachusetts Institute of Technology.
Deterministic methods for multi-control fuel loading optimization
NASA Astrophysics Data System (ADS)
Rahman, Fariz B. Abdul
We have developed a multi-control fuel loading optimization code for pressurized water reactors based on deterministic methods. The objective is to flatten the fuel burnup profile, which maximizes overall energy production. The optimal control problem is formulated using the method of Lagrange multipliers and the direct adjoining approach for treatment of the inequality power peaking constraint. The optimality conditions are derived for a multi-dimensional multi-group optimal control problem via calculus of variations. Due to the Hamiltonian having a linear control, our optimal control problem is solved using the gradient method to minimize the Hamiltonian and a Newton step formulation to obtain the optimal control. We are able to satisfy the power peaking constraint during depletion with the control at beginning of cycle (BOC) by building the proper burnup path forward in time and utilizing the adjoint burnup to propagate the information back to the BOC. Our test results show that we are able to achieve our objective and satisfy the power peaking constraint during depletion using either the fissile enrichment or burnable poison as the control. Our fuel loading designs show an increase of 7.8 equivalent full power days (EFPDs) in cycle length compared with 517.4 EFPDs for the AP600 first cycle.
Deterministic ripple-spreading model for complex networks.
Hu, Xiao-Bing; Wang, Ming; Leeson, Mark S; Hines, Evor L; Di Paolo, Ezequiel
2011-04-01
This paper proposes a deterministic complex network model, which is inspired by the natural ripple-spreading phenomenon. The motivations and main advantages of the model are the following: (i) The establishment of many real-world networks is a dynamic process, where it is often observed that the influence of a few local events spreads out through nodes, and then largely determines the final network topology. Obviously, this dynamic process involves many spatial and temporal factors. By simulating the natural ripple-spreading process, this paper reports a very natural way to set up a spatial and temporal model for such complex networks. (ii) Existing relevant network models are all stochastic models, i.e., with a given input, they cannot output a unique topology. Differently, the proposed ripple-spreading model can uniquely determine the final network topology, and at the same time, the stochastic feature of complex networks is captured by randomly initializing ripple-spreading related parameters. (iii) The proposed model can use an easily manageable number of ripple-spreading related parameters to precisely describe a network topology, which is more memory efficient when compared with traditional adjacency matrix or similar memory-expensive data structures. (iv) The ripple-spreading model has a very good potential for both extensions and applications. PMID:21599256
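A drastically simplified, hypothetical sketch of the ripple-spreading idea (not the authors' model): the topology is a deterministic function of the randomly initialized ripple parameters, so the same parameters always reproduce the same network, matching point (ii) of the abstract:

```python
import math
import random

def ripple_network(n, decay, seed=0):
    """Simplified ripple-spreading sketch: each node emits a ripple whose
    energy decays linearly with distance travelled; a link forms between
    nodes i and j if either node's ripple still has positive energy when
    it reaches the other. Given the sampled coordinates and energies,
    the resulting topology is fully deterministic."""
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    energy = [rng.uniform(0.5, 1.0) for _ in range(n)]
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(pos[i], pos[j])
            if energy[i] - decay * d > 0 or energy[j] - decay * d > 0:
                edges.add((i, j))
    return edges

print(len(ripple_network(20, decay=2.0)))
```

Re-running with the same seed reproduces the identical edge set, while different seeds capture the stochastic variability of real networks.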
Entrepreneurs, chance, and the deterministic concentration of wealth.
Fargione, Joseph E; Lehman, Clarence; Polasky, Stephen
2011-01-01
In many economies, wealth is strikingly concentrated. Entrepreneurs--individuals with ownership in for-profit enterprises--comprise a large portion of the wealthiest individuals, and their behavior may help explain patterns in the national distribution of wealth. Entrepreneurs are less diversified and more heavily invested in their own companies than is commonly assumed in economic models. We present an intentionally simplified individual-based model of wealth generation among entrepreneurs to assess the role of chance and determinism in the distribution of wealth. We demonstrate that chance alone, combined with the deterministic effects of compounding returns, can lead to unlimited concentration of wealth, such that the percentage of all wealth owned by a few entrepreneurs eventually approaches 100%. Specifically, concentration of wealth results when the rate of return on investment varies by entrepreneur and by time. This result is robust to inclusion of realities such as differing skill among entrepreneurs. The most likely overall growth rate of the economy decreases as businesses become less diverse, suggesting that high concentrations of wealth may adversely affect a country's economic growth. We show that a tax on large inherited fortunes, applied to a small portion of the most fortunate in the population, can efficiently arrest the concentration of wealth at intermediate levels. PMID:21814540
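A minimal individual-based simulation in the spirit of the model (the 5% mean return and 20% standard deviation are assumptions for illustration, not the paper's calibration) shows how chance combined with compounding concentrates wealth:

```python
import random

def simulate_wealth(n_entrepreneurs=1000, n_years=200, seed=0):
    """Each entrepreneur starts with equal wealth, which compounds at a
    return drawn independently per entrepreneur per year (assumed
    Gaussian, mean 5%, sd 20%; wealth is floored at zero).
    Returns the share of total wealth held by the top 1%."""
    rng = random.Random(seed)
    wealth = [1.0] * n_entrepreneurs
    for _ in range(n_years):
        wealth = [w * max(0.0, 1.0 + rng.gauss(0.05, 0.20)) for w in wealth]
    total = sum(wealth)
    top = sorted(wealth, reverse=True)[:n_entrepreneurs // 100]
    return sum(top) / total

print(f"top-1% share after 200 years: {simulate_wealth():.2f}")
```

Even though every entrepreneur faces the same return distribution, the multiplicative dynamics drive the top-1% share far above its initial 1%.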
Bioinspired, mechanical, deterministic fractal model for hierarchical suture joints
NASA Astrophysics Data System (ADS)
Li, Yaning; Ortiz, Christine; Boyce, Mary C.
2012-03-01
Many biological systems possess hierarchical and fractal-like interfaces and joint structures that bear and transmit loads, absorb energy, and accommodate growth, respiration, and/or locomotion. In this paper, an elastic deterministic fractal composite mechanical model was formulated to quantitatively investigate the role of structural hierarchy on the stiffness, strength, and failure of suture joints. From this model, it was revealed that the number of hierarchies (N) can be used to tailor and to amplify mechanical properties nonlinearly and with high sensitivity over a wide range of values (orders of magnitude) for a given volume and weight. Additionally, increasing hierarchy was found to result in mechanical interlocking of higher-order teeth, which creates additional load resistance capability, thereby preventing catastrophic failure in major teeth and providing flaw tolerance. Hence, this paper shows that the diversity of hierarchical and fractal-like interfaces and joints found in nature have definitive functional consequences and is an effective geometric-structural strategy to achieve different properties with limited material options in nature when other structural geometries and parameters are biologically challenging or inaccessible. This paper also indicates the use of hierarchy as a design strategy to increase design space and provides predictive capabilities to guide the mechanical design of synthetic flaw-tolerant bioinspired interfaces and joints.
Rock fracture characterization with GPR by means of deterministic deconvolution
NASA Astrophysics Data System (ADS)
Arosio, Diego
2016-03-01
In this work I address GPR characterization of rock fracture parameters, namely thickness and filling material. Rock fractures can generally be considered as thin beds, i.e., two interfaces whose separation is smaller than the resolution limit dictated by the Rayleigh criterion. The analysis of the amplitude of the thin bed response in the time domain might permit the estimation of fracture features for arbitrarily thin beds, but it is difficult to achieve and could be applied only to favorable cases (i.e., when all factors affecting amplitude are identified and corrected for). Here I explore the possibility to estimate fracture thickness and filling in the frequency domain by means of GPR. After introducing some theoretical aspects of thin bed response, I simulate GPR data on sandstone blocks with air- and water-filled fractures of known thickness. On the basis of some simplifying assumptions, I propose a 4-step procedure in which deterministic deconvolution is used to retrieve the magnitude and phase of the thin bed response in the selected frequency band. After deconvolved curves are obtained, fracture thickness and filling are estimated by means of a fitting process, which presents higher sensitivity to fracture thickness. Results are encouraging and suggest that GPR could be a fast and effective tool to determine fracture parameters in a non-destructive manner. Further GPR experiments in the lab are needed to test the proposed processing sequence and to validate the results obtained so far.
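The 4-step procedure itself is not given in the abstract. A generic frequency-domain deterministic deconvolution with a water-level stabilizer (a common regularization choice, assumed here rather than taken from the paper) might look like:

```python
import numpy as np

def deterministic_deconvolution(trace, wavelet, water_level=0.01):
    """Deconvolve a recorded trace by a known source wavelet via
    spectral division. A water level floors weak wavelet-spectrum
    magnitudes to stabilise the division."""
    n = len(trace)
    T = np.fft.rfft(trace, n)
    W = np.fft.rfft(wavelet, n)
    floor = water_level * np.max(np.abs(W))
    Wsafe = np.where(np.abs(W) < floor,
                     floor * np.exp(1j * np.angle(W)), W)
    return np.fft.irfft(T / Wsafe, n)

# A spike reflectivity convolved with a short wavelet, then recovered:
wavelet = np.array([0.2, 1.0, 0.4, -0.1])
reflectivity = np.zeros(64)
reflectivity[10] = 1.0
reflectivity[25] = -0.5
trace = np.convolve(reflectivity, wavelet)[:64]
recovered = deterministic_deconvolution(trace, wavelet)
print(np.round(recovered[10], 2), np.round(recovered[25], 2))  # 1.0 -0.5
```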
Neo-deterministic seismic hazard assessment in North Africa
NASA Astrophysics Data System (ADS)
Mourabit, T.; Abou Elenean, K. M.; Ayadi, A.; Benouar, D.; Ben Suleman, A.; Bezzeghoud, M.; Cheddadi, A.; Chourak, M.; ElGabry, M. N.; Harbi, A.; Hfaiedh, M.; Hussein, H. M.; Kacem, J.; Ksentini, A.; Jabour, N.; Magrin, A.; Maouche, S.; Meghraoui, M.; Ousadou, F.; Panza, G. F.; Peresan, A.; Romdhane, N.; Vaccari, F.; Zuccolo, E.
2014-04-01
North Africa is one of the most earthquake-prone areas of the Mediterranean. Many devastating earthquakes, some of them tsunami-triggering, inflicted heavy loss of life and considerable economic damage to the region. In order to mitigate the destructive impact of the earthquakes, the regional seismic hazard in North Africa is assessed using the neo-deterministic, multi-scenario methodology (NDSHA) based on the computation of synthetic seismograms, using the modal summation technique, on a regular grid of 0.2° × 0.2°. This is the first study aimed at producing NDSHA maps of North Africa including five countries: Morocco, Algeria, Tunisia, Libya, and Egypt. The key input data for the NDSHA algorithm are earthquake sources, seismotectonic zonation, and structural models. In the preparation of the input data, it has been essential to go beyond the national borders and to adopt a coherent strategy all over the area. Thanks to the collaborative efforts of the teams involved, it has been possible to properly merge the earthquake catalogues available for each country to define with homogeneous criteria the seismogenic zones, the characteristic focal mechanism associated with each of them, and the structural models used to model wave propagation from the sources to the sites. As a result, reliable seismic hazard maps are produced in terms of maximum displacement (Dmax), maximum velocity (Vmax), and design ground acceleration.
Experimental evidence for deterministic chaos in thermal pulse combustion
Daw, C.S.; Thomas, J.F.; Richards, G.A.; Narayanaswami, L.L.
1994-12-31
Given the existence of chaotic oscillations in reacting chemical systems, it is reasonable to ask whether or not similar phenomena can occur in combustion. In this paper, the authors present experimental evidence that kinetically driven chaos occurs in a highly simplified thermal pulse combustor. The combustor is a well-stirred reactor with a tailpipe extending from one end. Fuel and air are injected into the combustion chamber through orifices in the end opposite the tailpipe. Propane was the fuel used in all cases. From the experimental data analyses, it is clear that deterministic chaos is an important factor in thermal pulse combustor dynamics. While the authors have only observed such behavior in this particular type of combustor to date, they infer from their understanding of the origins of the chaos that it is likely to exist in other pulse combustors and even nonpulsing combustion. They speculate that realization of the importance of chaos in affecting flame stability could lead to significant changes in combustor design and control.
NASA Technical Reports Server (NTRS)
Chin, Jeffrey C.; Csank, Jeffrey T.; Haller, William J.; Seidel, Jonathan A.
2016-01-01
This document outlines methodologies designed to improve the interface between the Numerical Propulsion System Simulation framework and various control and dynamic analyses developed in the Matlab and Simulink environment. Although NPSS is most commonly used for steady-state modeling, this paper is intended to supplement the relatively sparse documentation on its transient analysis functionality. Matlab has become an extremely popular engineering environment, and better methodologies are necessary to develop tools that leverage the benefits of these disparate frameworks. Transient analysis is not a new feature of the Numerical Propulsion System Simulation (NPSS), but transient considerations are becoming more pertinent as multidisciplinary trade-offs begin to play a larger role in advanced engine designs. This paper also covers the growing convergence between NPSS and Matlab-based modeling toolsets. The following sections explore various design patterns to rapidly develop transient models. Each approach starts with a base model built with NPSS, and assumes the reader already has a basic understanding of how to construct a steady-state model. The second half of the paper focuses on further enhancements required to subsequently interface NPSS with Matlab codes. The first method is the simplest and most straightforward but performance constrained, and the last is the most abstract. These methods are not mutually exclusive, and the specific implementation details could vary greatly based on the designer's discretion. Basic recommendations are provided to organize model logic in a format most easily amenable to integration with existing Matlab control toolsets.
Stochastic model of tumor-induced angiogenesis: Ensemble averages and deterministic equations
NASA Astrophysics Data System (ADS)
Terragni, F.; Carretero, M.; Capasso, V.; Bonilla, L. L.
2016-02-01
A recent conceptual model of tumor-driven angiogenesis including branching, elongation, and anastomosis of blood vessels captures some of the intrinsic multiscale structures of this complex system, yet allowing one to extract a deterministic integro-partial-differential description of the vessel tip density [Phys. Rev. E 90, 062716 (2014), 10.1103/PhysRevE.90.062716]. Here we solve the stochastic model, show that ensemble averages over many realizations correspond to the deterministic equations, and fit the anastomosis rate coefficient so that the total number of vessel tips evolves similarly in the deterministic and ensemble-averaged stochastic descriptions.
Bulgakov, N G; Maksimov, V N
2005-01-01
Specific application of deterministic analysis to investigate the contingencies of various components of natural biocenosis was illustrated by the example of fish production and biomass of phyto- and zooplankton. Deterministic analysis confirms the theoretic assumptions on food preferences of herbivorous fish: both silver and bighead carps avoided feeding on cyanobacteria. Being a facultative phytoplankton feeder, silver carp preferred microalgae to zooplankton. Deterministic analysis allowed us to demonstrate the contingency of the mean biomass of phyto- and zooplankton during both the whole fish production cycle and the individual periods. PMID:16004266
Hybrid Monte Carlo-Deterministic Methods for Nuclear Reactor-Related Criticality Calculations
Edward W. Larson
2004-02-17
The overall goal of this project is to develop, implement, and test new Hybrid Monte Carlo-deterministic (or simply Hybrid) methods for the more efficient and more accurate calculation of nuclear engineering criticality problems. These new methods will make use of two (philosophically and practically) very different techniques - the Monte Carlo technique, and the deterministic technique - which have been developed completely independently during the past 50 years. The concept of this proposal is to merge these two approaches and develop fundamentally new computational techniques that enhance the strengths of the individual Monte Carlo and deterministic approaches, while minimizing their weaknesses.
Andreu-Perez, Javier; Solnais, Celine; Sriskandarajah, Kumuthan
2016-01-01
Recent advances in the reliability of the eye-tracking methodology as well as the increasing availability of affordable non-intrusive technology have opened the door to new research opportunities in a variety of areas and applications. This has raised increasing interest within disciplines such as medicine, business and education for analysing human perceptual and psychological processes based on eye-tracking data. However, most of the currently available software requires programming skills and focuses on the analysis of a limited set of eye-movement measures (e.g., saccades and fixations), thus excluding other measures of interest to the classification of a determined state or condition. This paper describes 'EALab', a MATLAB toolbox aimed at easing the extraction, multivariate analysis and classification stages of eye-activity data collected from commercial and independent eye trackers. The processing implemented in this toolbox enables the evaluation of variables extracted from a wide range of measures including saccades, fixations, blinks, pupil diameter and glissades. Using EALab does not require any programming and the analysis can be performed through a user-friendly graphical user interface (GUI) consisting of three processing modules: 1) eye-activity measure extraction interface, 2) variable selection and analysis interface, and 3) classification interface. PMID:26358034
"Eztrack": A single-vehicle deterministic tracking algorithm
Carrano, C J
2007-12-20
A variety of surveillance operations require the ability to track vehicles over a long period of time using sequences of images taken from a camera mounted on an airborne or similar platform. In order to be able to see and track a vehicle for any length of time, either a persistent surveillance imager is needed that can image wide fields of view over a long time-span, or a highly maneuverable smaller field-of-view imager is needed that can follow the vehicle of interest. The algorithm described here was designed for the persistent surveillance case. It turns out that most vehicle tracking algorithms described in the literature [1,2,3,4] are designed for higher frame rates (>5 FPS) and relatively short ground sampling distances (GSD) and resolutions (≈ a few cm to a couple of tens of cm). But for our datasets, we are restricted to lower resolutions and GSDs (≥0.5 m) and limited frame rates (≤2.0 Hz). As a consequence, we designed our own simple approach in IDL, which is a deterministic, motion-guided object tracker. The object tracking relies both on object features and path dynamics. The algorithm certainly has room for future improvements, but we have found it to be a useful tool in evaluating effects of frame rate, resolution/GSD, and spectral content (e.g., grayscale vs. color imaging). A block diagram of the tracking approach is given in Figure 1. We describe each of the blocks of the diagram in the upcoming sections.
Deterministic precision finishing of domes and conformal optics
NASA Astrophysics Data System (ADS)
Shorey, Aric; Kordonski, William; Tricard, Marc
2005-05-01
In order to enhance missile performance, future window and dome designs will incorporate shapes with improved aerodynamic performance compared with the more traditional flats and spheres. Due to their constantly changing curvature and steep slopes, these shapes are incompatible with most conventional polishing and metrology solutions. Two types of a novel polishing technology, Magnetorheological Finishing (MRF®) and Magnetorheological (MR) Jet, could enable cost-effective manufacturing of free-form optical surfaces. MRF, a deterministic sub-aperture magnetically assisted polishing method, has been developed to overcome many of the fundamental limitations of traditional finishing. MRF has demonstrated the ability to produce complex optical surfaces with accuracies better than 30 nm peak-to-valley (PV) and surface micro-roughness less than 1 nm rms on a wide variety of optical glasses, single crystals, and glass-ceramics. The polishing tool in MRF perfectly conforms to the optical surface making it well suited for finishing this class of optics. A newly developed magnetically assisted finishing method, MR Jet™, addresses the challenge of finishing the inside of steep concave domes and other irregular shapes. An applied magnetic field coupled with the properties of the MR fluid allows for a stable removal rate with stand-off distances of tens of centimeters. Surface figure and roughness values similar to traditional MRF have been demonstrated. Combining these technologies with metrology techniques, such as the Sub-aperture Stitching Interferometer (SSI®) and Asphere Stitching Interferometer (ASI®), enables higher precision finishing of the windows and domes today, as well as the finishing of future conformal designs.
Deterministic folding: The role of entropic forces and steric specificities
NASA Astrophysics Data System (ADS)
da Silva, Roosevelt A.; da Silva, M. A. A.; Caliri, A.
2001-03-01
The inverse folding problem of proteinlike macromolecules is studied by using a lattice Monte Carlo (MC) model in which steric specificities (nearest-neighbors constraints) are included and the hydrophobic effect is treated explicitly by considering interactions between the chain and solvent molecules. Chemical attributes and steric peculiarities of the residues are encoded in a 10-letter alphabet and a corresponding "syntax" is provided in order to write suitable sequences for the specified target structures; twenty-four target configurations, chosen in order to cover all possible values of the average contact order χ (0.2381⩽χ⩽0.4947 for this system), were encoded and analyzed. The results, obtained by MC simulations, are strongly influenced by geometrical properties of the native configuration, namely χ and the relative number φ of crankshaft-type structures: For χ<0.35 the folding is deterministic, that is, the syntax is able to encode successful sequences: The system presents larger encodability, minimum sequence-target degeneracies and smaller characteristic folding time τf. For χ⩾0.35 the above results are not reproduced any more: The folding success is severely reduced, showing strong correlation with φ. Additionally, the existence of distinct characteristic folding times suggests that different mechanisms are acting at the same time in the folding process. The results (all obtained from the same single model, under the same "physiological conditions") resemble some general features of the folding problem, supporting the premise that the steric specificities, in association with the entropic forces (hydrophobic effect), are basic ingredients in the protein folding process.
Deterministic and Stochastic Descriptions of Gene Expression Dynamics
NASA Astrophysics Data System (ADS)
Marathe, Rahul; Bierbaum, Veronika; Gomez, David; Klumpp, Stefan
2012-09-01
A key goal of systems biology is the predictive mathematical description of gene regulatory circuits. Different approaches are used such as deterministic and stochastic models, models that describe cell growth and division explicitly or implicitly etc. Here we consider simple systems of unregulated (constitutive) gene expression and compare different mathematical descriptions systematically to obtain insight into the errors that are introduced by various common approximations such as describing cell growth and division by an effective protein degradation term. In particular, we show that the population average of protein content of a cell exhibits a subtle dependence on the dynamics of growth and division, the specific model for volume growth and the age structure of the population. Nevertheless, the error made by models with implicit cell growth and division is quite small. Furthermore, we compare various models that are partially stochastic to investigate the impact of different sources of (intrinsic) noise. This comparison indicates that different sources of noise (protein synthesis, partitioning in cell division) contribute comparable amounts of noise if protein synthesis is not or only weakly bursty. If protein synthesis is very bursty, the burstiness is the dominant noise source, independent of other details of the model. Finally, we discuss two sources of extrinsic noise: cell-to-cell variations in protein content due to cells being at different stages in the division cycles, which we show to be small (for the protein concentration and, surprisingly, also for the protein copy number per cell) and fluctuations in the growth rate, which can have a significant impact.
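The comparison between deterministic and stochastic descriptions can be illustrated for constitutive expression with assumed rates (the values below are arbitrary): the ensemble average of an exact (Gillespie) birth-death simulation approaches the deterministic ODE mean:

```python
import math
import random

def deterministic_mean(k_syn, k_deg, t):
    """ODE dP/dt = k_syn - k_deg*P with P(0)=0; approaches k_syn/k_deg."""
    return (k_syn / k_deg) * (1.0 - math.exp(-k_deg * t))

def gillespie(k_syn, k_deg, t_end, rng):
    """Exact stochastic simulation of the birth-death protein dynamics:
    synthesis at rate k_syn, degradation at rate k_deg per molecule."""
    t, p = 0.0, 0
    while True:
        rate = k_syn + k_deg * p
        t += rng.expovariate(rate)
        if t > t_end:
            return p
        if rng.random() < k_syn / rate:
            p += 1   # synthesis event
        else:
            p -= 1   # degradation event

rng = random.Random(1)
samples = [gillespie(10.0, 0.1, 100.0, rng) for _ in range(500)]
mean = sum(samples) / len(samples)
print(mean, deterministic_mean(10.0, 0.1, 100.0))  # both near 100
```

Here cell growth and division are ignored entirely; the abstract's point is precisely how approximating division by an effective degradation term of this kind distorts (slightly) the population averages.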
Accurate deterministic solutions for the classic Boltzmann shock profile
NASA Astrophysics Data System (ADS)
Yue, Yubei
The Boltzmann equation or Boltzmann transport equation is a classical kinetic equation devised by Ludwig Boltzmann in 1872. It is regarded as a fundamental law in rarefied gas dynamics. Rather than using macroscopic quantities such as density, temperature, and pressure to describe the underlying physics, the Boltzmann equation uses a distribution function in phase space to describe the physical system, and all the macroscopic quantities are weighted averages of the distribution function. The information contained in the Boltzmann equation is surprisingly rich, and the Euler and Navier-Stokes equations of fluid dynamics can be derived from it using series expansions. Moreover, the Boltzmann equation can reach regimes far from the capabilities of fluid dynamical equations, such as the realm of rarefied gases---the topic of this thesis. Although the Boltzmann equation is very powerful, it is extremely difficult to solve in most situations. Thus the only hope is to solve it numerically. But soon one finds that even a numerical simulation of the equation is extremely difficult, due to both the complex and high-dimensional integral in the collision operator, and the hyperbolic phase-space advection terms. For this reason, until a few years ago most numerical simulations had to rely on Monte Carlo techniques. In this thesis I will present a new and robust numerical scheme to compute direct deterministic solutions of the Boltzmann equation, and I will use it to explore some classical gas-dynamical problems. In particular, I will study in detail one of the most famous and intrinsically nonlinear problems in rarefied gas dynamics, namely the accurate determination of the Boltzmann shock profile for a gas of hard spheres.
Merging deterministic and probabilistic approaches to forecast volcanic scenarios
NASA Astrophysics Data System (ADS)
Peruzzo, E.; Bisconti, L.; Barsanti, M.; Flandoli, F.; Papale, P.
2009-04-01
Volcanoes are extremely complex systems largely inaccessible to direct observation. As a consequence, many quantities that are relevant in determining the physical and chemical processes occurring at volcanoes are largely uncertain. On the other hand, the demand for eruption scenario forecasts at many hazardous volcanoes in the world is pressing, driving the development and use of increasingly complex physical models and numerical codes. Such codes are capable of accounting for the extremely complex, non-linear behaviour of the volcanic processes, and for the roles of several quantities in determining volcanic scenarios and hazards. However, they often require enormous computer resources and imply long (order of days to weeks) CPU times even on the most advanced parallel computation systems available to date. As a consequence, they can hardly be used to reasonably cover the spectrum of possible conditions expected at a given volcano. To this end, we have started the development of a mixed deterministic-probabilistic approach with the aim of substantially reducing (from order 10,000 to 10) the number of simulations needed to adequately represent possible scenarios and their probability of occurrence, corresponding to a given set of probability distributions for the initial/boundary conditions characterizing the system. The core of the problem is to find a "best" discretization of the continuous density function describing the random variables input to the model. This is done through stochastic quantization theory (Graf and Luschgy, 2000). The application of this theory to volcanic scenario forecasting has been tested on both an oversimplified analytical model and a more complex numerical model for magma flow in volcanic conduits, the latter still running in times short enough to allow comparison with Monte Carlo simulations. The final aim is to define proper strategies and paradigms for application to more complex, time-demanding codes
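The quantization step at the core of this approach, replacing a continuous input distribution by a small set of representative points with probability weights, can be sketched with a simple 1-D Lloyd iteration. The input distribution, the choice of k = 10, and the Lloyd scheme below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def lloyd_quantize(samples, k=10, iters=50):
    """1-D Lloyd iteration: find k representative points minimizing the
    mean squared quantization error, with associated probability weights."""
    rng = np.random.default_rng(0)
    centers = np.sort(rng.choice(samples, size=k, replace=False))
    for _ in range(iters):
        # assign each sample to its nearest center, then recenter
        idx = np.argmin(np.abs(samples[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            pts = samples[idx == j]
            if pts.size:
                centers[j] = pts.mean()
        centers = np.sort(centers)
    idx = np.argmin(np.abs(samples[:, None] - centers[None, :]), axis=1)
    weights = np.bincount(idx, minlength=k) / samples.size
    return centers, weights

# Hypothetical uncertain input parameter (e.g. a lognormal spread)
rng = np.random.default_rng(1)
samples = rng.lognormal(mean=0.0, sigma=0.5, size=10_000)
centers, weights = lloyd_quantize(samples, k=10)
print(len(centers), weights.sum())
```

Each of the 10 centers would then be one deterministic simulation run, weighted by its probability mass, instead of the ~10,000 Monte Carlo draws.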
ShareSync: A Solution for Deterministic Data Sharing over Ethernet
NASA Technical Reports Server (NTRS)
Dunn, Daniel J., II; Koons, William A.; Kennedy, Richard D.; Davis, Philip A.
2007-01-01
As part of upgrading the Contact Dynamics Simulation Laboratory (CDSL) at the NASA Marshall Space Flight Center (MSFC), a simple, cost effective method was needed to communicate data among the networked simulation machines and I/O controllers used to run the facility. To fill this need and similar applicable situations, a generic protocol was developed, called ShareSync. ShareSync is a lightweight, real-time, publish-subscribe Ethernet protocol for simple and deterministic data sharing across diverse machines and operating systems. ShareSync provides a simple Application Programming Interface (API) for simulation programmers to incorporate into their code. The protocol is compatible with virtually all Ethernet-capable machines, is flexible enough to support a variety of applications, is fast enough to provide soft real-time determinism, and is a low-cost resource for distributed simulation development, deployment, and maintenance. The first design cycle iteration of ShareSync has been completed, and the protocol has undergone several testing procedures including endurance and benchmarking tests and approaches the 2001ts data synchronization design goal for the CDSL.
GPU accelerated simulations of 3D deterministic particle transport using discrete ordinates method
NASA Astrophysics Data System (ADS)
Gong, Chunye; Liu, Jie; Chi, Lihua; Huang, Haowei; Fang, Jingyue; Gong, Zhenghu
2011-07-01
Graphics Processing Unit (GPU), originally developed for real-time, high-definition 3D graphics in computer games, now provides great capability for solving scientific applications. The basis of particle transport simulation is the time-dependent, multi-group, inhomogeneous Boltzmann transport equation. The numerical solution to the Boltzmann equation involves the discrete ordinates (Sn) method and the procedure of source iteration. In this paper, we present a GPU accelerated simulation of one-energy-group, time-independent, deterministic discrete ordinates particle transport in 3D Cartesian geometry (Sweep3D). The performance of the GPU simulations is reported for simulations with vacuum boundary conditions. The relative advantages and disadvantages of the GPU implementation, simulation on multiple GPUs, the programming effort, and code portability are also discussed. The results show that the overall performance speedup of one NVIDIA Tesla M2050 GPU ranges from 2.56 compared with one Intel Xeon X5670 chip to 8.14 compared with one Intel Core Q6600 chip for no flux fixup. The simulation with flux fixup on one M2050 is 1.23 times faster than on one X5670.
Liu, P; Li, C-Y; Wang, C-N; Sun, J-F; Min, J; Hu, D; Wu, Y-N
2011-05-01
This paper compares the exposure of the Chinese population and its sub-groups to acephate, a widely applied organophosphorus pesticide, using deterministic and probabilistic approaches. Acephate residue data were obtained from the national food contamination monitoring program 2001-2006, collected by multi-stage stratified sampling and with a detection rate of 3.3%. Food consumption data were gathered from the national diet and nutrition survey conducted in 2002 over three consecutive days by the 24-h recall method, covering 22,563 families or 65,886 consumers aged 2-100 years. For the point estimate, exposures were evidently higher than the acute reference dose (ARfD) in many cases. For the probabilistic approach, the P99.9 exposures for the general population and children accounted for 11.88 and 24.15% of the ARfD, respectively, in acute intake calculations, and 52.86 and 68.75%, respectively, of the acceptable daily intake (ADI) in chronic intake calculations. The exposure level of rural people was higher than that of urban dwellers, and vegetables contributed most to acephate intake. PMID:21598144
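The gap between a deterministic worst-case point estimate and a probabilistic high percentile can be illustrated with a toy Monte Carlo calculation; the distributions and parameter values below are invented for illustration and bear no relation to the survey data:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical input distributions (illustrative only):
residue = rng.lognormal(mean=-3.0, sigma=1.0, size=n)        # mg/kg food
consumption = rng.gamma(shape=2.0, scale=100.0, size=n)      # g/day
bodyweight = rng.normal(loc=60.0, scale=10.0, size=n).clip(30, None)  # kg

# Per-individual exposure in mg per kg body weight per day
exposure = residue * consumption / 1000.0 / bodyweight

# Deterministic point estimate: worst residue, worst consumption, light person
point_estimate = residue.max() * consumption.max() / 1000.0 / 30.0

# Probabilistic estimate: the P99.9 of the full exposure distribution
p999 = np.percentile(exposure, 99.9)
print(point_estimate > p999)
```

This reproduces the qualitative finding: the deterministic worst case sits far above the P99.9 of the joint distribution, because the extremes of residue, consumption and body weight rarely coincide in one individual.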
A Simulation Program for Dynamic Infrared (IR) Spectra
ERIC Educational Resources Information Center
Zoerb, Matthew C.; Harris, Charles B.
2013-01-01
A free program for the simulation of dynamic infrared (IR) spectra is presented. The program simulates the spectrum of two exchanging IR peaks based on simple input parameters. Larger systems can be simulated with minor modifications. The program is available as an executable program for PCs or can be run in MATLAB on any operating system. Source…
Saulnier, Dell D.; Persson, Lars-Åke; Streatfield, Peter Kim; Faruque, A. S. G.; Rahman, Anisur
2016-01-01
Background Cholera outbreaks are a continuing problem in Bangladesh, and the timely detection of an outbreak is important for reducing morbidity and mortality. In Matlab, the ongoing Health and Demographic Surveillance System (HDSS) data records symptoms of diarrhea in children under the age of 5 years at the community level. Cholera surveillance in Matlab currently uses hospital-based data. Objective The objective of this study is to determine whether increases in cholera in Matlab can be detected earlier by using HDSS diarrhea symptom data in a syndromic surveillance analysis, when compared to hospital admissions for cholera. Methods HDSS diarrhea symptom data and hospital admissions for cholera in children under 5 years of age over a 2-year period were analyzed with the syndromic surveillance statistical program EARS (Early Aberration Reporting System). Dates when significant increases in either symptoms or cholera cases occurred were compared to one another. Results The analysis revealed that there were 43 days over 16 months when the cholera cases or diarrhea symptoms increased significantly. There were 8 months when both data sets detected days with significant increases. In 5 of the 8 months, increases in diarrheal symptoms occurred before increases of cholera cases. The increases in symptoms occurred between 1 and 15 days before the increases in cholera cases. Conclusions The results suggest that the HDSS survey data may be able to detect an increase in cholera before an increase in hospital admissions is seen. However, there was no direct link between diarrheal symptom increases and cholera cases, and this, as well as other methodological weaknesses, should be taken into consideration. PMID:27193264
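An EARS-style aberration rule flags a day whose count exceeds the mean of a recent baseline window by several standard deviations, with a short guard band between the baseline and the test day. A minimal sketch with illustrative parameter choices (7-day baseline, 2-day lag, 3-sigma threshold), not the exact EARS implementation used in the study:

```python
import numpy as np

def ears_c2_flags(counts, baseline=7, lag=2, threshold=3.0):
    """Flag day t if counts[t] exceeds baseline mean + threshold*sd,
    where the baseline window of `baseline` days is separated from t by
    a `lag`-day guard band (a C2-style rule; parameters are illustrative)."""
    counts = np.asarray(counts, dtype=float)
    flags = []
    for t in range(baseline + lag, len(counts)):
        window = counts[t - lag - baseline : t - lag]
        mu = window.mean()
        sd = max(window.std(ddof=1), 0.5)  # floor sd to avoid division issues
        if counts[t] > mu + threshold * sd:
            flags.append(t)
    return flags

# Flat background of ~2 diarrhea reports/day with a spike on day 20
series = [2, 1, 3, 2, 2, 1, 2, 3, 2, 2, 1, 2, 2, 3, 1, 2, 2, 1, 3, 2, 15, 2, 2]
print(ears_c2_flags(series))  # → [20]
```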
Recent Achievements of the Neo-Deterministic Seismic Hazard Assessment in the CEI Region
Panza, G. F.; Kouteva, M.; Vaccari, F.; Peresan, A.; Romanelli, F.; Cioflan, C. O.; Radulian, M.; Marmureanu, G.; Paskaleva, I.; Gribovszki, K.; Varga, P.; Herak, M.; Zaichenco, A.; Zivcic, M.
2008-07-08
A review of the recent achievements of the innovative neo-deterministic approach for seismic hazard assessment through realistic earthquake scenarios has been performed. The procedure provides strong ground motion parameters for the purpose of earthquake engineering, based on deterministic seismic wave propagation modelling at different scales: regional, national and metropolitan. The main advantage of this neo-deterministic procedure is the simultaneous treatment of the contributions of the earthquake source and of the seismic wave propagation media to the strong motion at the target site/region, as required by basic physical principles. The neo-deterministic seismic microzonation procedure has been successfully applied to numerous metropolitan areas all over the world in the framework of several international projects. In this study some examples focused on the CEI region, concerning both regional seismic hazard assessment and seismic microzonation of the selected metropolitan areas, are shown.
Dini-Andreote, Francisco; Stegen, James C.; van Elsas, Jan D.; Falcao Salles, Joana
2015-03-17
Despite growing recognition that deterministic and stochastic factors simultaneously influence bacterial communities, little is known about mechanisms shifting their relative importance. To better understand underlying mechanisms, we developed a conceptual model linking ecosystem development during primary succession to shifts in the stochastic/deterministic balance. To evaluate the conceptual model we coupled spatiotemporal data on soil bacterial communities with environmental conditions spanning 105 years of salt marsh development. At the local scale there was a progression from stochasticity to determinism due to Na accumulation with increasing ecosystem age, supporting a main element of the conceptual model. At the regional-scale, soil organic matter (SOM) governed the relative influence of stochasticity and the type of deterministic ecological selection, suggesting scale-dependency in how deterministic ecological selection is imposed. Analysis of a new ecological simulation model supported these conceptual inferences. Looking forward, we propose an extended conceptual model that integrates primary and secondary succession in microbial systems.
A deterministic and statistical energy analysis of tyre cavity resonance noise
NASA Astrophysics Data System (ADS)
Mohamed, Zamri; Wang, Xu
2016-03-01
Tyre cavity resonance was studied using a combination of deterministic analysis and statistical energy analysis: the deterministic part was implemented with the impedance compact mobility matrix method, and the statistical part with the statistical energy analysis method. While the impedance compact mobility matrix method offers a deterministic solution for the cavity pressure response and the compliant-wall vibration velocity response in the low frequency range, the statistical energy analysis method offers a statistical solution for the responses in the high frequency range. In the mid frequency range, a combination of the statistical energy analysis and deterministic analysis methods can identify system coupling characteristics. Both methods have been compared against commercial software in order to validate the results. The combined analysis result has been verified by measurements on a tyre-cavity physical model. The analysis method developed in this study can be applied to other similar toroidal structural-acoustic systems.
Individual-based vs deterministic models for macroparasites: host cycles and extinction.
Rosà, Roberto; Pugliese, Andrea; Villani, Alessandro; Rizzoli, Annapaola
2003-06-01
Our understanding of the qualitative dynamics of host-macroparasite systems is mainly based on deterministic models. We study here an individual-based stochastic model that incorporates the same assumptions as the classical deterministic model. Stochastic simulations, using parameter values based on some case studies, preserve many features of the deterministic model, such as the average value of the variables and the approximate length of the cycles. An important difference is that, even when deterministic models yield damped oscillations, stochastic simulations yield apparently sustained oscillations. The amplitude of such oscillations may be large enough to threaten the parasites' persistence. With density-dependence in parasite demographic traits, persistence increases somewhat. Allowing instead for infections from an external parasite reservoir, we found that host extinction may easily occur. However, the extinction probability is almost independent of the level of external infection over a wide intermediate parameter region. PMID:12742175
Analysis of the deterministic and stochastic SIRS epidemic models with nonlinear incidence
NASA Astrophysics Data System (ADS)
Liu, Qun; Chen, Qingmei
2015-06-01
In this paper, the deterministic and stochastic SIRS epidemic models with nonlinear incidence are introduced and investigated. For the deterministic system, the basic reproductive number R0 is obtained. Furthermore, if R0 ≤ 1, then the disease-free equilibrium is globally asymptotically stable, and if R0 > 1, then there is a unique endemic equilibrium which is globally asymptotically stable. For the stochastic system, we first verify that there is a unique global positive solution starting from any positive initial value. Then, when R0 > 1, we prove that stochastic perturbations may lead the disease to extinction in scenarios where the deterministic system is persistent. When R0 ≤ 1, a result on the fluctuation of the solution around the disease-free equilibrium of the deterministic model is obtained under appropriate conditions. Finally, if the intensity of the white noise is sufficiently small and R0 > 1, then there is a unique stationary distribution for the stochastic system.
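The threshold behaviour of the deterministic system can be checked numerically with a minimal sketch; the bilinear incidence, the parameter values, and the forward-Euler integration below are illustrative stand-ins for the paper's nonlinear-incidence model:

```python
def sirs(beta, gamma, xi, days=2000, dt=0.01):
    """Forward-Euler integration of a basic SIRS model with bilinear
    incidence (illustrative; the paper treats nonlinear incidence)."""
    S, I, R = 0.99, 0.01, 0.0
    for _ in range(int(days / dt)):
        dS = -beta * S * I + xi * R   # susceptibles: infection + loss of immunity
        dI = beta * S * I - gamma * I # infectives: infection - recovery
        dR = gamma * I - xi * R       # recovered: recovery - waning immunity
        S, I, R = S + dt * dS, I + dt * dI, R + dt * dR
    return S, I, R

beta, gamma, xi = 0.5, 0.2, 0.05
R0 = beta / gamma          # 2.5 > 1: the endemic equilibrium should attract
S, I, R = sirs(beta, gamma, xi)
print(round(R0, 2), round(I, 4))
```

For these parameters the endemic equilibrium is S* = γ/β = 0.4 and I* = (1 − S*)/(1 + γ/ξ) = 0.12, and the trajectory converges there, consistent with global asymptotic stability for R0 > 1.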
A deterministic particle method for one-dimensional reaction-diffusion equations
NASA Technical Reports Server (NTRS)
Mascagni, Michael
1995-01-01
We derive a deterministic particle method for the solution of nonlinear reaction-diffusion equations in one spatial dimension. This deterministic method is an analog of a Monte Carlo method for the solution of these problems that has been previously investigated by the author. The deterministic method leads to the consideration of a system of ordinary differential equations for the positions of suitably defined particles. We then consider time-explicit and time-implicit methods for this system of ordinary differential equations, and we study Picard and Newton iterations for the solution of the implicit system. Next we solve this system numerically and study the discretization error both analytically and numerically. Numerical computation shows that this deterministic method is automatically adaptive to large gradients in the solution.
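The contrast between Picard and Newton iteration for an implicit time step can be sketched on a scalar model problem; the linear test equation and step size below are illustrative, not the paper's particle system:

```python
def implicit_euler_step(f, fprime, y_n, h, newton=True, iters=50, tol=1e-12):
    """One implicit-Euler step y_{n+1} = y_n + h*f(y_{n+1}), solved either
    by Newton's method or by Picard (fixed-point) iteration. Illustrative
    sketch; the paper applies the same idea to a system of particle ODEs."""
    y = y_n  # initial guess
    for _ in range(iters):
        if newton:
            g = y - y_n - h * f(y)                 # nonlinear residual
            y_new = y - g / (1.0 - h * fprime(y))  # Newton update
        else:
            y_new = y_n + h * f(y)                 # Picard sweep
        if abs(y_new - y) < tol:
            return y_new
        y = y_new
    return y

f = lambda y: -10.0 * y   # linear decay as a model problem
fp = lambda y: -10.0
exact = 1.0 / 1.5         # closed form here: y1 = y0 / (1 + 10*h) with h = 0.05
y_newton = implicit_euler_step(f, fp, 1.0, 0.05, newton=True)
y_picard = implicit_euler_step(f, fp, 1.0, 0.05, newton=False)
print(abs(y_newton - exact) < 1e-10, abs(y_picard - exact) < 1e-10)
```

On this linear problem Newton converges in one iteration, while Picard contracts geometrically (here with factor h·|f′| = 0.5); for stiffer problems the Picard contraction condition fails and Newton becomes the practical choice.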
Deterministic Computer-Controlled Polishing Process for High-Energy X-Ray Optics
NASA Technical Reports Server (NTRS)
Khan, Gufran S.; Gubarev, Mikhail; Speegle, Chet; Ramsey, Brian
2010-01-01
A deterministic computer-controlled polishing process for large X-ray mirror mandrels is presented. Using the tool's influence function and material removal rate extracted from polishing experiments, design considerations for polishing laps and optimized operating parameters are discussed.
Neo-Deterministic and Probabilistic Seismic Hazard Assessments: a Comparative Analysis
NASA Astrophysics Data System (ADS)
Peresan, Antonella; Magrin, Andrea; Nekrasova, Anastasia; Kossobokov, Vladimir; Panza, Giuliano F.
2016-04-01
Objective testing is the key issue towards any reliable seismic hazard assessment (SHA). Different earthquake hazard maps must demonstrate their capability in anticipating ground shaking from future strong earthquakes before they can be appropriately used for different purposes, such as engineering design, insurance, and emergency management. Quantitative assessment of map performance is also an essential step in the scientific process of their revision and possible improvement. Cross-checking of probabilistic models with available observations and independent physics-based models is recognized as a major validation procedure. The existing maps from classical probabilistic seismic hazard analysis (PSHA), as well as those from the neo-deterministic analysis (NDSHA), which have already been developed for several regions worldwide (including Italy, India and North Africa), are considered to exemplify the possibilities of the cross-comparative analysis in spotting out limits and advantages of different methods. Where the data permit, a comparative analysis versus the documented seismic activity observed in reality is carried out, showing how available observations about past earthquakes can contribute to assess the performances of the different methods. Neo-deterministic refers to a scenario-based approach, which allows for consideration of a wide range of possible earthquake sources as the starting point for scenarios constructed via full waveform modeling. The method does not make use of empirical attenuation models (i.e. Ground Motion Prediction Equations, GMPE) and naturally supplies realistic time series of ground shaking (i.e. complete synthetic seismograms), readily applicable to complete engineering analysis and other mitigation actions. The standard NDSHA maps provide reliable envelope estimates of maximum seismic ground motion from a wide set of possible scenario earthquakes, including the largest deterministically or historically defined credible earthquake. In addition
Phase conjugation with random fields and with deterministic and random scatterers
Gbur, G.; Wolf, E.
1999-01-01
The theory of distortion correction by phase conjugation, developed since the discovery of this phenomenon many years ago, applies to situations when the field that is conjugated is monochromatic and the medium with which it interacts is deterministic. In this Letter a generalization of the theory is presented that applies to phase conjugation of partially coherent waves interacting with either deterministic or random weakly scattering nonabsorbing media. © 1999 Optical Society of America
Yildirim, Necmettin; Kazanci, Caner
2011-01-01
A brief introduction to mathematical modeling of biochemical regulatory reaction networks is presented. Both deterministic and stochastic modeling techniques are covered with examples from enzyme kinetics, coupled reaction networks with oscillatory dynamics and bistability. The Yildirim-Mackey model for lactose operon is used as an example to discuss and show how deterministic and stochastic methods can be used to investigate various aspects of this bacterial circuit. PMID:21187231
Deterministic methods in radiation transport. A compilation of papers presented February 4-5, 1992
Rice, A.F.; Roussin, R.W.
1992-06-01
The Seminar on Deterministic Methods in Radiation Transport was held February 4-5, 1992, in Oak Ridge, Tennessee. Eleven presentations were made and the full papers are published in this report, along with three that were submitted but not given orally. These papers represent a good overview of the state of the art in the deterministic solution of radiation transport problems for a variety of applications of current interest to the Radiation Shielding Information Center user community.
Smith, Leon E.; Gesh, Christopher J.; Pagh, Richard T.; Miller, Erin A.; Shaver, Mark W.; Ashbaker, Eric D.; Batdorf, Michael T.; Ellis, J. E.; Kaye, William R.; McConn, Ronald J.; Meriwether, George H.; Ressler, Jennifer J.; Valsan, Andrei B.; Wareing, Todd A.
2008-10-31
Radiation transport modeling methods used in the radiation detection community fall into one of two broad categories: stochastic (Monte Carlo) and deterministic. Monte Carlo methods are typically the tool of choice for simulating gamma-ray spectrometers operating in homeland and national security settings (e.g. portal monitoring of vehicles or isotope identification using handheld devices), but deterministic codes that discretize the linear Boltzmann transport equation in space, angle, and energy offer potential advantages in computational efficiency for many complex radiation detection problems. This paper describes the development of a scenario simulation framework based on deterministic algorithms. Key challenges include: formulating methods to automatically define an energy group structure that can support modeling of gamma-ray spectrometers ranging from low to high resolution; combining deterministic transport algorithms (e.g. ray-tracing and discrete ordinates) to mitigate ray effects for a wide range of problem types; and developing efficient and accurate methods to calculate gamma-ray spectrometer response functions from the deterministic angular flux solutions. The software framework aimed at addressing these challenges is described and results from test problems that compare coupled deterministic-Monte Carlo methods and purely Monte Carlo approaches are provided.
Image enhancement using MCNP5 code and MATLAB in neutron radiography.
Tharwat, Montaser; Mohamed, Nader; Mongy, T
2014-07-01
This work presents a method that can be used to enhance the neutron radiography (NR) image for objects with highly scattering materials like hydrogen, carbon and other light materials. This method used the Monte Carlo code MCNP5 to simulate the NR process, obtain the flux distribution for each pixel of the image, and determine the scattered-neutron distribution that caused image blur; MATLAB was then used to subtract this scattered-neutron distribution from the initial image to improve its quality. This work was performed before the commissioning of the digital NR system in January 2013. The MATLAB enhancement method works well in the case of static, film-based neutron radiography, while for the neutron imaging (NI) technique, image enhancement and quantitative measurement were carried out efficiently using the ImageJ software. The enhanced image quality and quantitative measurements are presented in this work. PMID:24583508
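The core enhancement step, subtracting a simulated scattered-neutron distribution from the measured image, can be sketched in a few lines; the 4x4 image and the uniform scatter estimate below are invented for illustration (in the paper the scatter map comes from per-pixel MCNP5 flux results):

```python
import numpy as np

# Hypothetical "measured" NR image (object in the middle) and a simulated
# scattered-neutron component assumed uniform here for simplicity.
measured = np.array([[10., 12., 11., 10.],
                     [12., 30., 28., 11.],
                     [11., 29., 31., 12.],
                     [10., 11., 12., 10.]])
scatter = np.full_like(measured, 8.0)

# Enhancement: remove the scatter pedestal, clipping negatives to zero
enhanced = np.clip(measured - scatter, 0.0, None)

# Contrast (max/mean) improves once the flat scatter background is removed
contrast_before = measured.max() / measured.mean()
contrast_after = enhanced.max() / enhanced.mean()
print(contrast_after > contrast_before)
```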
Inoue, Kentaro; Maeda, Kazuhiro; Miyabe, Takaaki; Matsuoka, Yu; Kurata, Hiroyuki
2014-09-01
Mathematical modeling has become a standard technique to understand the dynamics of complex biochemical systems. To promote the modeling, we had developed the CADLIVE dynamic simulator that automatically converted a biochemical map into its associated mathematical model, simulated its dynamic behaviors and analyzed its robustness. To enhance the feasibility by CADLIVE and extend its functions, we propose the CADLIVE toolbox available for MATLAB, which implements not only the existing functions of the CADLIVE dynamic simulator, but also the latest tools including global parameter search methods with robustness analysis. The seamless, bottom-up processes consisting of biochemical network construction, automatic construction of its dynamic model, simulation, optimization, and S-system analysis greatly facilitate dynamic modeling, contributing to the research of systems biology and synthetic biology. This application can be freely downloaded from http://www.cadlive.jp/CADLIVE_MATLAB/ together with an instruction. PMID:24623466
Predicting the performance of local seismic networks using Matlab and Google Earth.
Chael, Eric Paul
2009-11-01
We have used Matlab and Google Earth to construct a prototype application for modeling the performance of local seismic networks for monitoring small, contained explosions. Published equations based on refraction experiments provide estimates of peak ground velocities as a function of event distance and charge weight. Matlab routines implement these relations to calculate the amplitudes across a network of stations from sources distributed over a geographic grid. The amplitudes are then compared to ambient noise levels at the stations, and scaled to determine the smallest yield that could be detected at each source location by a specified minimum number of stations. We use Google Earth as the primary user interface, both for positioning the stations of a hypothetical local network, and for displaying the resulting detection threshold contours.
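The threshold calculation described above can be sketched as follows; the power-law amplitude model, the station geometry, and all parameter values are illustrative assumptions standing in for the published refraction relations and real noise levels:

```python
import numpy as np

def detection_threshold(stations, noise, sources, n_min=3,
                        k=1.0, a=0.8, b=1.5, snr=2.0):
    """Smallest charge weight W detectable at each source point by at least
    n_min stations, assuming a generic power-law amplitude model
    A = k * W**a / r**b (a stand-in for the published relations)."""
    thresholds = []
    for src in sources:
        r = np.linalg.norm(stations - src, axis=1)     # source-station distances
        # per-station minimum yield: solve snr*noise = k*W**a / r**b for W
        w_min = (snr * noise * r**b / k) ** (1.0 / a)
        # the n_min-th smallest per-station yield is the network threshold
        thresholds.append(np.sort(w_min)[n_min - 1])
    return np.array(thresholds)

stations = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.]])  # km
noise = 1e-3                                  # ambient noise (arbitrary units)
sources = np.array([[5., 5.], [50., 50.]])    # one inside, one far outside
thr = detection_threshold(stations, noise, sources)
print(thr[1] > thr[0])  # the distant source requires a larger charge
```

Contouring `thr` over a geographic grid of sources gives exactly the detection-threshold maps displayed in Google Earth.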
Modeling and simulation of hydraulic vibration system based on bond graph and Matlab/Simulink
NASA Astrophysics Data System (ADS)
Lian, Hongzhen; Kou, Ziming
2008-10-01
The hydraulic vibration system controlled by a wave exciter is a mechanic-electric-fluid integrated system with highly dynamic characteristics. Because the system is nonlinear and complex, its modeling and simulation have attracted attention from professionals in the hydraulic vibration industry. In this paper, a method is proposed: using the power bond graph method, the bond graph model of the system is established, and control parameters are incorporated into the model in order to govern the alternating power flow. The mathematical model (state equations) of the system is then built according to bond graph theory and the control relations, and a simulation model is built using the Matlab/Simulink software; this model intuitively expresses the system's power flow direction and control relations. Because stiff equations arise easily in models of hydraulic systems, an adaptive algorithm offered by the Matlab software can be chosen to obtain more precise simulation results.
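Why the choice of a stiff-capable solver matters can be seen on a scalar model problem: an explicit step blows up at a step size where an implicit step remains stable. The decay rate and step size below are illustrative (in Matlab the analogous choice is a stiff solver such as ode15s over ode45):

```python
# Stiff model problem y' = lam*y with a fast decay rate. At step h, explicit
# Euler multiplies y by (1 + h*lam) = -9 each step (unstable), while implicit
# (backward) Euler divides y by (1 - h*lam) = 11 each step (stable).
lam, h, steps = -1000.0, 0.01, 100

y_exp = 1.0
y_imp = 1.0
for _ in range(steps):
    y_exp = y_exp + h * lam * y_exp   # explicit update: diverges
    y_imp = y_imp / (1.0 - h * lam)   # implicit update: decays to zero

print(abs(y_exp) > 1e50, 0.0 <= y_imp < 1e-10)
```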
Gro2mat: a package to efficiently read gromacs output in MATLAB.
Dien, Hung; Deane, Charlotte M; Knapp, Bernhard
2014-07-30
Molecular dynamics (MD) simulations are a state-of-the-art computational method used to investigate molecular interactions at atomic scale. Interaction processes out of experimental reach can be monitored using MD software, such as Gromacs. Here, we present the gro2mat package that allows fast and easy access to Gromacs output files from Matlab. Gro2mat enables direct parsing of the most common Gromacs output formats, including the binary xtc-format. No openly available Matlab parser currently exists for this format. The xtc reader is orders of magnitude faster than other available pdb/ascii workarounds. Gro2mat is especially useful for scientists with an interest in quick prototyping of new mathematical and statistical approaches for Gromacs trajectory analyses. © 2014 Wiley Periodicals, Inc. PMID:24920464
NASA Astrophysics Data System (ADS)
Sharp, J. S.; Glover, P. M.; Moseley, W.
2007-05-01
In this paper we describe the recent changes to the curriculum of the second year practical laboratory course in the School of Physics and Astronomy at the University of Nottingham. In particular, we describe how Matlab has been implemented as a teaching tool and discuss both its pedagogical advantages and disadvantages in teaching undergraduate students about computer interfacing and instrument control techniques. We also discuss the motivation for converting the interfacing language that is used in the laboratory from LabView to Matlab. We describe an example of a typical experiment the students are required to complete and we conclude by briefly assessing how the recent curriculum changes have affected both student performance and compliance.
RenderToolbox3: MATLAB tools that facilitate physically based stimulus rendering for vision research
Heasly, Benjamin S.; Cottaris, Nicolas P.; Lichtman, Daniel P.; Xiao, Bei; Brainard, David H.
2014-01-01
RenderToolbox3 provides MATLAB utilities and prescribes a workflow that should be useful to researchers who want to employ graphics in the study of vision and perhaps in other endeavors as well. In particular, RenderToolbox3 facilitates rendering scene families in which various scene attributes and renderer behaviors are manipulated parametrically, enables spectral specification of object reflectance and illuminant spectra, enables the use of physically based material specifications, helps validate renderer output, and converts renderer output to physical units of radiance. This paper describes the design and functionality of the toolbox and discusses several examples that demonstrate its use. We have designed RenderToolbox3 to be portable across computer hardware and operating systems and to be free and open source (except for MATLAB itself). RenderToolbox3 is available at https://github.com/DavidBrainard/RenderToolbox3. PMID:24511145
Confined Crystal Growth in Space. Deterministic vs Stochastic Vibroconvective Effects
NASA Astrophysics Data System (ADS)
Ruiz, Xavier; Bitlloch, Pau; Ramirez-Piscina, Laureano; Casademunt, Jaume
The analysis of the correlations between characteristics of the acceleration environment and the quality of the crystalline materials grown in microgravity remains an open and interesting question. Acceleration disturbances in space environments usually give rise to effective gravity pulses, gravity pulse trains of finite duration, quasi-steady accelerations or g-jitters. To quantify these disturbances, deterministic translational plane-polarized signals have largely been used in the literature [1]. In the present work, we take an alternative approach which models g-jitters in terms of a stochastic process in the form of the so-called narrow-band noise, which is designed to capture the main statistical properties of realistic g-jitters. In particular, we compare their effects to those of single-frequency disturbances. The crystalline quality has been characterized, following previous analyses, in terms of two parameters, the longitudinal and the radial segregation coefficients. The first averages the dopant distribution transversally, providing continuous longitudinal information on the degree of segregation along the growth process. The radial segregation characterizes the degree of lateral non-uniformity of the dopant at the solid-liquid interface at each instant of growth. To complete the description, and because the heat flux fluctuations at the interface have a direct impact on the crystal growth quality (growth striations), the time dependence of a Nusselt number associated with the growing interface has also been monitored. For realistic g-jitters acting orthogonally to the thermal gradient, the longitudinal segregation remains practically unperturbed in all simulated cases. Also, the Nusselt number is not significantly affected by the noise. On the other hand, radial segregation, despite its low magnitude, exhibits a peculiar low-frequency response in all realizations. [1] X. Ruiz, "Modelling of the influence of residual gravity on the segregation in
Deterministic Modeling of the High Temperature Test Reactor
Ortensi, J.; Cogliati, J. J.; Pope, M. A.; Ferrer, R. M.; Ougouag, A. M.
2010-06-01
Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability for the Next Generation Nuclear Plant (NGNP) project. In order to examine INL's current prismatic reactor deterministic analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of a model for the initial criticality, a 19-column thin annular core, and the fully loaded core critical condition with 30 columns. Special emphasis is devoted to the annular core modeling, which shares more characteristics with the NGNP base design. The DRAGON code is used in this study because it offers significant ease and versatility in modeling prismatic designs. Despite some geometric limitations, the code performs quite well compared to other lattice physics codes. DRAGON can generate transport solutions via collision probability (CP), method of characteristics (MOC), and discrete ordinates (Sn) methods. A fine-group cross-section library based on the SHEM 281 energy structure is used in the DRAGON calculations. HEXPEDITE is the hexagonal-z full-core solver used in this study and is based on the Green's function solution of the transverse-integrated equations. In addition, two Monte Carlo (MC) based codes, MCNP5 and PSG2/SERPENT, provide benchmarking capability for the DRAGON and nodal diffusion solver codes. The results from this study show a consistent bias of 2-3% in the core multiplication factor. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature. The ENDF/B-VII graphite and U235 cross sections appear to be the main source of the error. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with other benchmark participants but are 40% higher than the experimental values. This discrepancy with the measurement stems from the fact that during the experiments the
Application of tabu search to deterministic and stochastic optimization problems
NASA Astrophysics Data System (ADS)
Gurtuna, Ozgur
During the past two decades, advances in computer science and operations research have resulted in many new optimization methods for tackling complex decision-making problems. One such method, tabu search, forms the basis of this thesis. Tabu search is a very versatile optimization heuristic that can be used for solving many different types of optimization problems. Another research area, real options, has also gained considerable momentum during the last two decades. Real options analysis is emerging as a robust and powerful method for tackling decision-making problems under uncertainty. Although the theoretical foundations of real options are well established and significant progress has been made on the theoretical side, applications are lagging behind. A strong emphasis on practical applications and a multidisciplinary approach form the basic rationale of this thesis. The fundamental concepts and ideas behind tabu search and real options are investigated in order to provide a concise overview of the theory supporting both fields. This theoretical overview feeds into the design and development of algorithms that are used to solve three different problems. The first problem examined is a deterministic one: finding the optimal servicing tours that minimize energy and/or duration of missions for servicing satellites in Earth orbit. Due to the nature of the space environment, this problem is modeled as a time-dependent, moving-target optimization problem. Two solution methods are developed: an exhaustive method for smaller problem instances, and a method based on tabu search for larger ones. The second and third problems are related to decision-making under uncertainty. In the second problem, tabu search and real options are investigated together within the context of a stochastic optimization problem: option valuation. By merging tabu search and Monte Carlo simulation, a new method for studying options, the Tabu Search Monte Carlo (TSMC) method, is
A Deterministic Approach to Active Debris Removal Target Selection
NASA Astrophysics Data System (ADS)
Lidtke, A.; Lewis, H.; Armellin, R.
2014-09-01
purpose of ADR are also drawn and a deterministic method for ADR target selection, which could reduce the number of ADR missions to be performed, is proposed.
Design of a Library of Components for Autonomous Photovoltaic System under Matlab/Simulink
NASA Astrophysics Data System (ADS)
Chermitti, A.; Boukli-Hacene, O.; Meghebbar, A.; Bibitriki, N.; Kherous, A.
This paper presents a library of components for PV systems under Matlab/Simulink, named "PV Systems Toolbox". This toolbox allows analyzing the behavior of a PV system. It also estimates the power produced by the PV generator according to changes in climatic conditions and the nature of the load. An accurate model of the PV generator is presented based on the equation of the Shockley diode. A simple simulation example is given using a typical 60W PV module.
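A minimal sketch of the single-diode (Shockley) PV model underlying such a toolbox is given below for illustration. The parameter values are invented for a generic 36-cell module, not taken from the paper's 60 W module, and series/shunt resistances are neglected:

```python
import math

def pv_current(v, i_ph=3.8, i_0=2.1e-9, n=1.3, t=298.15, n_cells=36):
    """Module current from the single-diode (Shockley) model,
    I = I_ph - I_0*(exp(V / (n*Vt*N_cells)) - 1), with series and shunt
    resistance neglected. Parameter values are illustrative."""
    q, k = 1.602176634e-19, 1.380649e-23
    vt = n * k * t / q * n_cells      # thermal voltage of the cell string
    return i_ph - i_0 * (math.exp(v / vt) - 1.0)

def max_power_point(v_max=25.0, dv=0.1):
    """Sweep voltage and return (V, P) at the maximum power point."""
    best = (0.0, 0.0)
    for s in range(int(v_max / dv) + 1):
        v = s * dv
        i = pv_current(v)
        if i <= 0.0:                  # past open-circuit voltage
            break
        if v * i > best[1]:
            best = (v, v * i)
    return best

v_mpp, p_mpp = max_power_point()
```

Sweeping the voltage and taking the maximum of V·I locates the maximum power point, which is how such toolboxes typically report module output under given climatic conditions.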
POST II Trajectory Animation Tool Using MATLAB, V1.0
NASA Technical Reports Server (NTRS)
Raiszadeh, Behzad
2005-01-01
A trajectory animation tool has been developed for accurately depicting the position and attitude of bodies in flight. The movies generated by this MATLAB-based tool serve as an engineering analysis aid for gaining further understanding of the dynamic behavior of bodies in flight. The tool has been designed to interface with the output generated from POST II simulations, and is able to animate a single vehicle as well as multiple vehicles in flight.
Poblano v1.0 : a Matlab toolbox for gradient-based optimization.
Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson
2010-03-01
We present Poblano v1.0, a Matlab toolbox for solving gradient-based unconstrained optimization problems. Poblano implements three optimization methods (nonlinear conjugate gradients, limited-memory BFGS, and truncated Newton) that require only first order derivative information. In this paper, we describe the Poblano methods, provide numerous examples on how to use Poblano, and present results of Poblano used in solving problems from a standard test collection of unconstrained optimization problems.
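The flavor of a first-order-only solver like Poblano's nonlinear conjugate gradient method can be sketched in a few lines. The code below is a generic Fletcher-Reeves NCG with Armijo backtracking applied to a simple quadratic test function; it is a sketch of the technique, not Poblano's implementation, and all names are ours:

```python
def f_quad(x):
    """Anisotropic quadratic test function, minimum at (3, -2)."""
    return (x[0] - 3.0) ** 2 + 10.0 * (x[1] + 2.0) ** 2

def grad_quad(x):
    return [2.0 * (x[0] - 3.0), 20.0 * (x[1] + 2.0)]

def ncg_minimize(f, grad, x0, max_iter=5000, tol=1e-12):
    """Fletcher-Reeves nonlinear conjugate gradients with Armijo
    backtracking, restarting on non-descent directions. Uses only
    first-order derivative information."""
    x = list(x0)
    g = grad(x)
    d = [-gi for gi in g]
    for _ in range(max_iter):
        slope = sum(gi * di for gi, di in zip(g, d))
        if slope >= 0.0:               # restart with steepest descent
            d = [-gi for gi in g]
            slope = sum(gi * di for gi, di in zip(g, d))
        fx, t = f(x), 1.0
        while f([xi + t * di for xi, di in zip(x, d)]) > fx + 1e-4 * t * slope:
            t *= 0.5
            if t < 1e-14:
                break
        x = [xi + t * di for xi, di in zip(x, d)]
        g_new = grad(x)
        gg_new = sum(gi * gi for gi in g_new)
        if gg_new < tol:
            break
        beta = gg_new / sum(gi * gi for gi in g)   # Fletcher-Reeves update
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x

x_opt = ncg_minimize(f_quad, grad_quad, [0.0, 0.0])
```

On real problems the line search and the conjugacy update (Fletcher-Reeves vs. Polak-Ribiere, etc.) are exactly the kinds of options such toolboxes expose as parameters.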
A preliminary report on the development of MATLAB tensor classes for fast algorithm prototyping.
Bader, Brett William; Kolda, Tamara Gibson
2004-07-01
We describe three MATLAB classes for manipulating tensors in order to allow fast algorithm prototyping. A tensor is a multidimensional or N-way array. We present a tensor class for manipulating tensors which allows for tensor multiplication and 'matricization.' We have further added two classes for representing tensors in decomposed format: cp{_}tensor and tucker{_}tensor. We demonstrate the use of these classes by implementing several algorithms that have appeared in the literature.
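Matricization (mode-n unfolding) is easy to state concretely. The sketch below unfolds a 3-way tensor stored as nested Python lists, following the column-ordering convention of Kolda and Bader in which mode-n fibers become matrix columns; it illustrates the operation only and is not the toolbox code:

```python
def unfold(tensor, mode):
    """Mode-n matricization of a 3-way tensor (nested lists): mode-n
    fibers become the columns of the resulting matrix, with the earlier
    of the remaining indices varying fastest."""
    I, J, K = len(tensor), len(tensor[0]), len(tensor[0][0])
    if mode == 0:    # I x (J*K), index j varies fastest
        return [[tensor[i][j][k] for k in range(K) for j in range(J)]
                for i in range(I)]
    if mode == 1:    # J x (I*K), index i varies fastest
        return [[tensor[i][j][k] for k in range(K) for i in range(I)]
                for j in range(J)]
    if mode == 2:    # K x (I*J), index i varies fastest
        return [[tensor[i][j][k] for j in range(J) for i in range(I)]
                for k in range(K)]
    raise ValueError("mode must be 0, 1 or 2")

# 2 x 2 x 2 example with entries t[i][j][k] = 4*i + 2*j + k
t = [[[0, 1], [2, 3]], [[4, 5], [6, 7]]]
```

Tensor-times-matrix products and the decomposed formats (cp_tensor, tucker_tensor) are conventionally defined in terms of exactly these unfoldings.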
NASA Astrophysics Data System (ADS)
Błażejewski, Ryszard; Murat-Błażejewska, Sadżide; Jędrkowiak, Martyna
2014-09-01
The paper presents a water balance of a flow-through, dammed lake, consisting of the following terms: surface inflow, underground inflow/outflow based on the Dupuit equation, precipitation on the lake surface, evaporation from the water surface, and outflow from the lake at a cross-section closed by a damming weir. Because these balance terms depend nonlinearly on the lake water level, the balance equation was implemented in Matlab-Simulink®. Applicability of the model was assessed on the example of Sławianowskie Lake, with a surface area of 276 ha and a mean depth of 6.6 m; the lake was divided into two basins of differing depth. Water balances, performed at monthly time intervals for the hydrological year 2009, showed good agreement with measurements for the first three months only. It is concluded that the balancing time interval should be shortened to one day to minimize the errors. Calibration of the model would also be easier and more adequate if the hydraulic conductivity of the soils and bottom sediments adjacent to the lake were estimated from water-level measurements in piezometers located in its vicinity.
A MATLAB function for 3-D and 4-D topographical visualization in geosciences
NASA Astrophysics Data System (ADS)
Zekollari, Harry
2016-04-01
Combining topographical information and spatially varying variables in visualizations is often crucial and inherent to geoscientific problems. Despite this, it is often an impossible or a very time-consuming and difficult task to create such figures using classic software packages. This is also the case in the widely used numerical computing environment MATLAB. Here a MATLAB function is introduced for plotting a variety of natural environments with a pronounced topography, such as glaciers, volcanoes and lakes in mountainous regions. Landscapes can be visualized in 3-D, with a single colour defining a featured surface type (e.g. ice, snow, water, lava), or with a colour scale defining the magnitude of a variable (e.g. ice thickness, snow depth, water depth, surface velocity, gradient, elevation). As input only the elevation of the subsurface (typically the bedrock) and the surface are needed, which can be complemented by various input parameters in order to adapt the figure to specific needs. The figures are particularly suited to making time-evolving animations of natural processes, such as a glacier retreat or a lake drainage event. Several visualization examples are provided along with animations. The function, which is freely available for download, requires only the basic package of MATLAB and can be run on any standard desktop or laptop computer.
Time-stepping methods for the simulation of the self-assembly of nano-crystals in MATLAB on a GPU
NASA Astrophysics Data System (ADS)
Korzec, M. D.; Ahnert, T.
2013-10-01
Partial differential equations describing the patterning of thin crystalline films are typically of fourth or sixth order; they are quasi- or semilinear, and they are mostly defined on simple geometries such as rectangular domains. For the numerical simulation of these kinds of problems, spectral methods are an efficient approach. We apply several implicit-explicit (IMEX) schemes to one recently derived PDE that we express in terms of coefficients of trigonometric interpolants. While the simplest IMEX scheme turns out to have the mildest step-size restriction, higher-order SBDF schemes tend to be less stable, and exponential time integrators are fastest for the calculation of very accurate solutions. We implemented a reduced model in the EXPINT package syntax [3] and compared various exponential schemes. A convexity-splitting approach was employed to stabilize the SBDF1 scheme. We show that accuracy control is crucial when using this idea; therefore we present a time-adaptive SBDF1/SBDF1-2-step method that yields convincing results reflecting the change in timescales during topological changes of the nanostructures. The implementation of all presented methods is carried out in MATLAB. We used the open-source GPUmat package to gain up to 5-fold runtime benefits by carrying out calculations on a low-cost GPU, without requiring any knowledge of low-level programming or CUDA, and found speedups comparable with those of MATLAB's PCT or of GPUmat run on Octave.
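The stability advantage of the simplest IMEX scheme (SBDF1: implicit Euler on the stiff linear term, explicit Euler on the remainder) can be seen on a scalar model problem. The stiff test equation below is our own illustration, not the authors' thin-film PDE:

```python
import math

# Model problem u' = -lam*(u - cos t) - sin t, exact solution u(t) = cos t:
# a scalar stand-in for the stiff-linear / mild-nonlinear splitting used
# in spectral discretizations (illustrative only).
lam, dt, t_end = 1.0e4, 0.01, 2.0
n = int(round(t_end / dt))

# SBDF1 (IMEX Euler): the stiff term -lam*u is implicit, the rest explicit
u, err = 1.0, 0.0
for i in range(n):
    t = i * dt
    u = (u + dt * (lam * math.cos(t) - math.sin(t))) / (1.0 + lam * dt)
    err = max(err, abs(u - math.cos((i + 1) * dt)))

# fully explicit Euler with the same step is unstable (lam*dt = 100 >> 2)
v = 1.0
for i in range(n):
    t = i * dt
    v = v + dt * (-lam * (v - math.cos(t)) - math.sin(t))
    if abs(v) > 1.0e6:        # diverged; stop before floating-point overflow
        break
```

The IMEX update stays accurate at a step size 50 times beyond the explicit stability limit, which is exactly why the mild step-size restriction of the simplest IMEX scheme matters in practice.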
NASA Astrophysics Data System (ADS)
Szymanowski, Mariusz; Kryza, Maciej
2015-11-01
Our study examines the role of auxiliary variables in the process of spatial modelling and mapping of climatological elements, with air temperature in Poland used as an example. Multivariable algorithms are the most frequently applied for spatialization of air temperature, and in many studies their results are shown to be better than those obtained by various one-dimensional techniques. In most previous studies, two main strategies were used to perform multidimensional spatial interpolation of air temperature. First, it was accepted that all variables significantly correlated with air temperature should be incorporated into the model. Second, it was assumed that the more spatial variation of air temperature was deterministically explained, the better the quality of the spatial interpolation. The main goal of the paper was to examine both above-mentioned assumptions. The analysis was performed using data from 250 meteorological stations and for 69 air temperature cases aggregated on different levels: from daily means to a 10-year annual mean. Two cases were considered for detailed analysis. The set of potential auxiliary variables covered 11 environmental predictors of air temperature. Another purpose of the study was to compare the results of interpolation given by various multivariable methods using the same set of explanatory variables. Two regression models, multiple linear regression (MLR) and geographically weighted regression (GWR), as well as their extensions to the regression-kriging forms MLRK and GWRK, respectively, were examined. Stepwise regression was used to select variables for the individual models, and cross-validation was used to validate the results, with special attention paid to statistically significant improvement of the model using the mean absolute error (MAE) criterion. The main results of this study led to rejection of both assumptions considered. Usually, including more than two or three of the most significantly
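The MLR building block of the methods compared above is ordinary least squares. A self-contained sketch via the normal equations follows; the elevation/temperature numbers are invented for illustration and are not the study's data:

```python
def mlr_fit(X, y):
    """Ordinary least squares for multiple linear regression via the
    normal equations X^T X beta = X^T y, solved by Gaussian elimination
    with partial pivoting. A minimal sketch of the MLR step, not the
    study's GWR/regression-kriging code."""
    n, p = len(X), len(X[0])
    A = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(p)]
         for i in range(p)]
    b = [sum(X[k][i] * y[k] for k in range(n)) for i in range(p)]
    for col in range(p):                      # forward elimination
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * p                          # back substitution
    for i in reversed(range(p)):
        beta[i] = (b[i] - sum(A[i][j] * beta[j]
                              for j in range(i + 1, p))) / A[i][i]
    return beta

# temperature ~ intercept + elevation lapse (illustrative numbers only)
elev = [100.0, 300.0, 500.0, 800.0, 1200.0]
X = [[1.0, e] for e in elev]
y = [14.0 - 0.0065 * e for e in elev]
beta = mlr_fit(X, y)
```

Adding further auxiliary predictors means appending columns to X, which is precisely the model-growth step the stepwise selection in the study controls.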
NASA Astrophysics Data System (ADS)
Caplan, R. M.
2013-04-01
We present a simple-to-use yet powerful code package called NLSEmagic to numerically integrate the nonlinear Schrödinger equation in one, two, and three dimensions. NLSEmagic is a high-order finite-difference code package which utilizes graphic processing unit (GPU) parallel architectures. The codes running on the GPU are many times faster than their serial counterparts, and are much cheaper to run than on standard parallel clusters. The codes are developed with usability and portability in mind, and therefore are written to interface with MATLAB utilizing custom GPU-enabled C codes with the MEX-compiler interface. The packages are freely distributed, including user manuals and set-up files. Catalogue identifier: AEOJ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOJ_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 124453 No. of bytes in distributed program, including test data, etc.: 4728604 Distribution format: tar.gz Programming language: C, CUDA, MATLAB. Computer: PC, MAC. Operating system: Windows, MacOS, Linux. Has the code been vectorized or parallelized?: Yes. Number of processors used: Single CPU, number of GPU processors dependent on chosen GPU card (max is currently 3072 cores on GeForce GTX 690). Supplementary material: Setup guide, Installation guide. RAM: Highly dependent on dimensionality and grid size. For typical medium-large problem size in three dimensions, 4 GB is sufficient. Keywords: Nonlinear Schrödinger Equation, GPU, high-order finite difference, Bose-Einstein condensates. Classification: 4.3, 7.7. Nature of problem: Integrate solutions of the time-dependent one-, two-, and three-dimensional cubic nonlinear Schrödinger equation. Solution method: The integrators utilize a fully-explicit fourth-order Runge-Kutta scheme in time
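The classical fourth-order Runge-Kutta time step used by such integrators is compact. Below it is applied to the space-independent cubic NLSE, whose exact solution makes the accuracy easy to check; this toy problem is our own illustration, not an NLSEmagic example:

```python
import cmath

def rk4_step(f, t, u, dt):
    """One classical fourth-order Runge-Kutta step for a scalar
    complex ODE u' = f(t, u)."""
    k1 = f(t, u)
    k2 = f(t + dt / 2, u + dt / 2 * k1)
    k3 = f(t + dt / 2, u + dt / 2 * k2)
    k4 = f(t + dt, u + dt * k3)
    return u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Dropping the spatial derivative from the cubic NLSE leaves
# i u_t + |u|^2 u = 0, i.e. u' = 1j*|u|^2*u, with exact solution
# u(t) = u0 * exp(1j * |u0|^2 * t); the modulus |u| is conserved.
f = lambda t, u: 1j * abs(u) ** 2 * u
u, dt = 1.0 + 0.0j, 0.01
for step in range(100):            # integrate to t = 1
    u = rk4_step(f, step * dt, u, dt)
```

In the full package the right-hand side would also contain the high-order finite-difference Laplacian, but the time-stepping structure is exactly this.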
Improve Problem Solving Skills through Adapting Programming Tools
NASA Technical Reports Server (NTRS)
Shaykhian, Linda H.; Shaykhian, Gholam Ali
2007-01-01
There are numerous ways for engineers and students to become better problem-solvers. The use of command-line and visual programming tools can help to model a problem and formulate a solution through visualization. The analysis of problem attributes and constraints provides insight into the scope and complexity of the problem. The visualization aspect of the problem-solving approach tends to make students and engineers more systematic in their thought process and helps them catch errors before proceeding too far in the wrong direction. The problem-solver identifies and defines important terms, variables, rules, and procedures required for solving a problem. Every step required to construct the problem solution can be defined in program commands that produce intermediate output. This paper advocates improving problem-solving skills through the use of a programming tool. MATLAB, created by MathWorks, is an interactive numerical computing environment and programming language. It is a matrix-based system that easily lends itself to matrix manipulation and plotting of functions and data. MATLAB can be used as an interactive command line or as a sequence of commands that can be saved in a file as a script or named functions. Prior programming experience is not required to use MATLAB commands. GNU Octave, part of the GNU Project, is a free program for performing numerical computations and is comparable to MATLAB. MATLAB visual and command-line programming are presented here.
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.
2006-01-01
This report provides a user guide for the Compressible Flow Toolbox, a collection of algorithms that solve almost 300 linear and nonlinear classical compressible flow relations. The algorithms, implemented in the popular MATLAB programming language, are useful for analysis of one-dimensional steady flow with constant entropy, friction, heat transfer, or shock discontinuities. The solutions do not include any gas dissociative effects. The toolbox also contains functions for comparing and validating the equation-solving algorithms against solutions previously published in the open literature. The classical equations solved by the Compressible Flow Toolbox are: isentropic-flow equations, Fanno flow equations (pertaining to flow of an ideal gas in a pipe with friction), Rayleigh flow equations (pertaining to frictionless flow of an ideal gas, with heat transfer, in a pipe of constant cross section), normal-shock equations, oblique-shock equations, and Prandtl-Meyer expansion equations. At the time this report was published, the Compressible Flow Toolbox was available without cost from the NASA Software Repository.
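The isentropic-flow relations are representative of the closed-form equations such a toolbox evaluates. A sketch using the standard textbook formulas for a calorically perfect gas (not the toolbox's actual API; the function name is ours):

```python
def isentropic_ratios(mach, gamma=1.4):
    """Static-to-stagnation ratios for steady isentropic flow of a
    calorically perfect gas as functions of Mach number."""
    t = 1.0 / (1.0 + 0.5 * (gamma - 1.0) * mach * mach)   # T / T0
    p = t ** (gamma / (gamma - 1.0))                      # p / p0
    rho = t ** (1.0 / (gamma - 1.0))                      # rho / rho0
    return t, p, rho

t_ratio, p_ratio, rho_ratio = isentropic_ratios(1.0)
```

At Mach 1 with γ = 1.4 these give T/T0 = 0.8333 and p/p0 ≈ 0.5283, handy reference values for spot-checking any implementation.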
Improving ground-penetrating radar data in sedimentary rocks using deterministic deconvolution
Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.; Byrnes, A.P.
2003-01-01
Resolution is key to confidently identifying unique geologic features using ground-penetrating radar (GPR) data. Source wavelet "ringing" (related to bandwidth) in a GPR section limits resolution because of wavelet interference, and can smear reflections in time and/or space. The resultant potential for misinterpretation limits the usefulness of GPR. Deconvolution offers the ability to compress the source wavelet and improve temporal resolution. Unlike statistical deconvolution, deterministic deconvolution is mathematically simple and stable while providing the highest possible resolution, because it uses the source wavelet unique to the specific radar equipment. Source wavelets generated in, transmitted through and acquired from air allow successful application of deterministic approaches to wavelet suppression. We demonstrate the validity of using a source wavelet acquired in air as the operator for deterministic deconvolution in a field application using "400-MHz" antennas at a quarry site characterized by interbedded carbonates with shale partings. We collected GPR data on a bench adjacent to cleanly exposed quarry faces in which we placed conductive rods to provide conclusive ground truth for this approach to deconvolution. The best deconvolution results, which are confirmed by the conductive rods for the 400-MHz antenna tests, were observed for wavelets acquired when the transmitter and receiver were separated by 0.3 m. Applying deterministic deconvolution to GPR data collected in sedimentary strata at our study site resulted in an improvement in resolution (50%) and improved spatial location (0.10-0.15 m) of geologic features compared to the same data processed without deterministic deconvolution. The effectiveness of deterministic deconvolution for increased resolution and spatial accuracy of specific geologic features is further demonstrated by comparing results of deconvolved data with nondeconvolved data acquired along a 30-m transect immediately adjacent
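Deterministic deconvolution with a known source wavelet reduces to spectral division stabilized by a small water level. The sketch below demonstrates the idea on a synthetic trace built by (circular) convolution of a spiky reflectivity with a short ringing wavelet; the function names and water-level value are illustrative, not the authors' processing flow:

```python
import cmath
import math

def dft(x, inverse=False):
    """Naive O(n^2) discrete Fourier transform -- fine for a small demo."""
    n = len(x)
    s = 1.0 if inverse else -1.0
    out = [sum(x[k] * cmath.exp(s * 2j * math.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [v / n for v in out] if inverse else out

def deconvolve(trace, wavelet, water_level=0.01):
    """Deterministic deconvolution: spectral division by the known source
    wavelet, stabilized by a water level on the wavelet power spectrum."""
    T, W = dft(trace), dft(wavelet)
    floor = (water_level * max(abs(w) for w in W)) ** 2
    R = [t * w.conjugate() / max(abs(w) ** 2, floor) for t, w in zip(T, W)]
    return [v.real for v in dft(R, inverse=True)]

n = 64
wavelet = [1.0, -0.6, 0.3, -0.1] + [0.0] * (n - 4)    # short "ringing" source
reflectivity = [0.0] * n
reflectivity[10], reflectivity[30] = 1.0, -0.5        # two reflectors
trace = [sum(reflectivity[j] * wavelet[(i - j) % n] for j in range(n))
         for i in range(n)]                           # circular convolution
recovered = deconvolve(trace, wavelet)
```

Because the operator is the measured wavelet itself, the division collapses each smeared arrival back to a spike, which is the resolution gain the deterministic approach provides.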
Wildfire susceptibility mapping: comparing deterministic and stochastic approaches
NASA Astrophysics Data System (ADS)
Pereira, Mário; Leuenberger, Michael; Parente, Joana; Tonini, Marj
2016-04-01
Conservation of Nature and Forests (ICNF) (http://www.icnf.pt/portal), which provides a detailed description of the shape and the size of the area burnt by each fire in each year of occurrence. Two methodologies for susceptibility mapping were compared. The first, a deterministic approach based on the study of Verde and Zêzere (2010), includes the computation of favorability scores for each variable and the fire-occurrence probability, as well as the validation of each model resulting from the integration of the different variables. The second, a non-linear method, is the Random Forest algorithm (Breiman, 2001): this led us to identify the most relevant variables conditioning the presence of wildfire and allowed us to generate a map of fire susceptibility based on the resulting variable-importance measures. By means of GIS techniques, we mapped the obtained predictions, which represent the susceptibility of the study area to fires. Results obtained by applying both methodologies for wildfire susceptibility mapping, as well as wildfire hazard maps for different total annual burnt-area scenarios, were compared with the reference maps, allowing us to assess the best approach for susceptibility mapping in Portugal. References: - Breiman, L. (2001). Random forests. Machine Learning, 45, 5-32. - Verde, J. C., & Zêzere, J. L. (2010). Assessment and validation of wildfire susceptibility and hazard in Portugal. Natural Hazards and Earth System Science, 10(3), 485-497.
Kamboj, Sunita; Cheng, Jing-Jy; Yu, Charley
2005-05-01
The dose assessments for sites containing residual radioactivity usually involve the use of computer models that employ input parameters describing the physical conditions of the contaminated and surrounding media and the living and consumption patterns of the receptors in analyzing potential doses to the receptors. The precision of the dose results depends on the precision of the input parameter values. The identification of sensitive parameters that have great influence on the dose results would help set priorities in research and information gathering for parameter values so that a more precise dose assessment can be conducted. Two methods of identifying site-specific sensitive parameters, deterministic and probabilistic, were compared by applying them to the RESRAD computer code for analyzing radiation exposure for a residential farmer scenario. The deterministic method has difficulty in evaluating the effect of simultaneous changes in a large number of input parameters on the model output results. The probabilistic method easily identified the most sensitive parameters, but the sensitivity measure of other parameters was obscured. The choice of sensitivity analysis method would depend on the availability of site-specific data. Generally speaking, the deterministic method would identify the same set of sensitive parameters as the probabilistic method when 1) the baseline values used in the deterministic method were selected near the mean or median value of each parameter and 2) the selected range of parameter values used in the deterministic method was wide enough to cover the 5th to 95th percentile values from the distribution of that parameter. PMID:15824576
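The two strategies can be contrasted on a toy dose model: one-at-a-time (deterministic) ranking by output range, versus probabilistic ranking by input-output correlation over joint samples. Everything below (the model and parameter ranges) is invented for illustration and has no connection to RESRAD's actual equations:

```python
import random

def dose_model(ingestion, concentration, shielding):
    """Toy dose model (illustrative only; not RESRAD): dose rises with
    intake rate and source concentration, falls with shielding."""
    return ingestion * concentration / (1.0 + shielding)

params = {  # (low, baseline, high), spanning roughly the 5th-95th percentiles
    "ingestion":     (0.5, 1.0, 2.0),
    "concentration": (0.2, 1.0, 5.0),
    "shielding":     (0.0, 1.0, 3.0),
}

def oat_range(name):
    """Deterministic one-at-a-time sensitivity: output range when one
    parameter sweeps its interval, the others held at baseline."""
    lo, base, hi = params[name]
    args = {k: v[1] for k, v in params.items()}
    args[name] = lo
    d_lo = dose_model(**args)
    args[name] = hi
    d_hi = dose_model(**args)
    return abs(d_hi - d_lo)

det_ranking = sorted(params, key=oat_range, reverse=True)

# probabilistic alternative: sample all parameters jointly and rank inputs
# by the magnitude of their correlation with the dose
rng = random.Random(1)
samples = [{k: rng.uniform(v[0], v[2]) for k, v in params.items()}
           for _ in range(2000)]
doses = [dose_model(**s) for s in samples]

def corr(name):
    xs = [s[name] for s in samples]
    mx, md = sum(xs) / len(xs), sum(doses) / len(doses)
    cov = sum((x - mx) * (d - md) for x, d in zip(xs, doses))
    vx = sum((x - mx) ** 2 for x in xs)
    vd = sum((d - md) ** 2 for d in doses)
    return cov / (vx * vd) ** 0.5

mc_ranking = sorted(params, key=lambda k: abs(corr(k)), reverse=True)
```

Here both rankings agree because the deterministic ranges were chosen to span roughly the 5th-95th percentiles, echoing the agreement condition stated in the abstract.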
Assessment of stochastic and deterministic models of 6304 quasar lightcurves from SDSS Stripe 82
NASA Astrophysics Data System (ADS)
Andrae, R.; Kim, D.-W.; Bailer-Jones, C. A. L.
2013-06-01
The optical lightcurves of many quasars show variations of tenths of a magnitude or more on timescales of months to years. This variation often cannot be described well by a simple deterministic model. We perform a Bayesian comparison of over 20 deterministic and stochastic models on 6304 quasi-stellar object (QSO) lightcurves in SDSS Stripe 82. We include the damped random walk (or Ornstein-Uhlenbeck [OU] process), a particular type of stochastic model, which recent studies have focused on. Further models we consider are single and double sinusoids, multiple OU processes, higher order continuous autoregressive processes, and composite models. We find that only 29 out of 6304 QSO lightcurves are described significantly better by a deterministic model than a stochastic one. The OU process is an adequate description of the vast majority of cases (6023). Indeed, the OU process is the best single model for 3462 lightcurves, with the composite OU process/sinusoid model being the best in 1706 cases. The latter model is the dominant one for brighter/bluer QSOs. Furthermore, a non-negligible fraction of QSO lightcurves show evidence that not only the mean is stochastic but the variance is stochastic, too. Our results confirm earlier work that QSO lightcurves can be described with a stochastic model, but place this on a firmer footing, and further show that the OU process is preferred over several other stochastic and deterministic models. Of course, there may well exist yet better (deterministic or stochastic) models, which have not been considered here.
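The damped random walk can be simulated exactly, because the OU transition density is Gaussian with known mean and variance. A sketch with illustrative parameters (this demonstrates the model class only, not the authors' fitting code):

```python
import math
import random

def simulate_drw(n, dt, tau, sigma, mean=0.0, seed=42):
    """Exact simulation of an Ornstein-Uhlenbeck process (damped random
    walk): relaxation time tau, asymptotic standard deviation sigma.
    Each step draws from the exact Gaussian transition density."""
    rng = random.Random(seed)
    a = math.exp(-dt / tau)
    x, path = mean, []
    for _ in range(n):
        x = (mean + a * (x - mean)
             + sigma * math.sqrt(1.0 - a * a) * rng.gauss(0.0, 1.0))
        path.append(x)
    return path

lightcurve = simulate_drw(n=5000, dt=1.0, tau=50.0, sigma=0.2)
```

Fitting such a model to an observed lightcurve amounts to inferring tau and sigma, which is the core of the Bayesian model comparison described above.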
Stability analysis of multi-group deterministic and stochastic epidemic models with vaccination rate
NASA Astrophysics Data System (ADS)
Wang, Zhi-Gang; Gao, Rui-Mei; Fan, Xiao-Ming; Han, Qi-Xing
2014-09-01
We discuss in this paper a deterministic multi-group MSIR epidemic model with a vaccination rate. The basic reproduction number ℛ0, a key parameter in epidemiology, is a threshold which determines the persistence or extinction of the disease. By using Lyapunov function techniques, we show that if ℛ0 is greater than 1 and the deterministic model obeys some conditions, then the disease prevails: the infective class persists and the endemic state is asymptotically stable in a feasible region. If ℛ0 is less than or equal to 1, then the infective class disappears and the disease dies out. In addition, stochastic noise around the endemic equilibrium is added to the deterministic MSIR model, so that the deterministic model is extended to a system of stochastic ordinary differential equations. In the stochastic version, we carry out a detailed analysis of the asymptotic behavior of the stochastic model. Moreover, regarding the value of ℛ0, when the stochastic system obeys some conditions and ℛ0 is greater than 1, we deduce that the stochastic system is stochastically asymptotically stable. Finally, the deterministic and stochastic model dynamics are illustrated through computer simulations.
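The ℛ0 threshold behaviour is easy to reproduce numerically in a single-group SIR caricature of the model (the paper's model is multi-group MSIR; the sketch below only illustrates the persistence/extinction dichotomy, with invented rates):

```python
def sir_vacc(beta, gamma, vacc, days, dt=0.01):
    """Forward-Euler integration of a single-group SIR model in which
    susceptibles are vaccinated (moved to the removed class) at rate
    `vacc`. Illustrates the R0 threshold only; not the paper's model."""
    s, i, r = 0.99, 0.01, 0.0
    for _ in range(int(days / dt)):
        ds = -beta * s * i - vacc * s
        di = beta * s * i - gamma * i
        dr = gamma * i + vacc * s
        s, i, r = s + dt * ds, i + dt * di, r + dt * dr
    return s, i, r

# With vacc = 0, R0 = beta/gamma: below 1 the disease dies out,
# above 1 an epidemic sweeps through the population.
low = sir_vacc(beta=0.1, gamma=0.2, vacc=0.0, days=400)    # R0 = 0.5
high = sir_vacc(beta=0.4, gamma=0.1, vacc=0.0, days=400)   # R0 = 4.0
```

Since the three rates sum to zero, the total population is conserved by every Euler step, a useful invariant to check in such simulations.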
A Comparison of Probabilistic and Deterministic Campaign Analysis for Human Space Exploration
NASA Technical Reports Server (NTRS)
Merrill, R. Gabe; Andraschko, Mark; Stromgren, Chel; Cirillo, Bill; Earle, Kevin; Goodliff, Kandyce
2008-01-01
Human space exploration is by its very nature an uncertain endeavor. Vehicle reliability, technology development risk, budgetary uncertainty, and launch uncertainty all contribute to stochasticity in an exploration scenario. However, traditional strategic analysis has been done in a deterministic manner, analyzing and optimizing the performance of a series of planned missions. History has shown that exploration scenarios rarely follow such a planned schedule. This paper describes a methodology to integrate deterministic and probabilistic analysis of scenarios in support of human space exploration. Probabilistic strategic analysis is used to simulate "possible" scenario outcomes, based upon the likelihood of occurrence of certain events and a set of pre-determined contingency rules. The results of the probabilistic analysis are compared to the nominal results from the deterministic analysis to evaluate the robustness of the scenario to adverse events and to test and optimize contingency planning.
NASA Astrophysics Data System (ADS)
Chen, LiBing; Lu, Hong
2015-03-01
We show how a remote positive operator valued measurement (POVM) can be implemented deterministically by using partially entangled state(s). Firstly, we present a theoretical scheme for implementing deterministically a remote and controlled POVM onto any one of N qubits via a partially entangled ( N + 1)-qubit Greenberger-Horne-Zeilinger (GHZ) state, in which ( N - 1) administrators are included. Then, we design another scheme for implementing deterministically a POVM onto N remote qubits via N partially entangled qubit pairs. Our schemes have been designed for obtaining the optimal success probabilities: i.e. they are identical to those in the ordinary, local, POVMs. In these schemes, the POVM dictates the amount of entanglement needed. The fact that such overall treatment can save quantum resources is notable.
NASA Astrophysics Data System (ADS)
Daly, Peter M.; Hebenstreit, Gerald T.
2003-04-01
Deterministic source localization using matched-field processing (MFP) has yielded good results in propagation scenarios where the nonrandom model parameter input assumption is valid. In many shallow-water environments, inputs to acoustic propagation models may be better represented using random distributions rather than fixed quantities. One can estimate the negative effect of random source inputs on deterministic MFP by (1) obtaining a realistic statistical representation of a signal model parameter, then (2) using the mean of the parameter as input to the MFP signal model (the so-called ``replica vector''), (3) synthesizing a source signal using multiple realizations of the random parameter, and (4) estimating the source localization error by correlating the synthesized signal vector with the replica vector over a three-dimensional space. This approach allows one to quantify the deterministic localization error introduced by random model parameters, including sound velocity profile, hydrophone locations, and sediment thickness and speed. [Work supported by DARPA Advanced Technology Office.]
Experimental demonstration on the deterministic quantum key distribution based on entangled photons.
Chen, Hua; Zhou, Zhi-Yuan; Zangana, Alaa Jabbar Jumaah; Yin, Zhen-Qiang; Wu, Juan; Han, Yun-Guang; Wang, Shuang; Li, Hong-Wei; He, De-Yong; Tawfeeq, Shelan Khasro; Shi, Bao-Sen; Guo, Guang-Can; Chen, Wei; Han, Zheng-Fu
2016-01-01
As an important resource, entangled light sources have been used in developing quantum information technologies, such as quantum key distribution (QKD). There are few experiments implementing entanglement-based deterministic QKD protocols, since the security of existing protocols may be compromised in lossy channels. In this work, we report on a loss-tolerant deterministic QKD experiment which follows a modified "Ping-Pong" (PP) protocol. The experimental results demonstrate for the first time that a secure deterministic QKD session can be fulfilled in a channel with an optical loss of 9 dB, based on a telecom-band entangled photon source. This exhibits a conceivable prospect of utilizing entangled light sources in real-life fiber-based quantum communications. PMID:26860582
NASA Astrophysics Data System (ADS)
Samson, E. C.; Wilson, K. E.; Newman, Z. L.; Anderson, B. P.
2016-02-01
We experimentally and numerically demonstrate deterministic creation and manipulation of a pair of oppositely charged singly quantized vortices in a highly oblate Bose-Einstein condensate (BEC). Two identical blue-detuned, focused Gaussian laser beams that pierce the BEC serve as repulsive obstacles for the superfluid atomic gas; by controlling the positions of the beams within the plane of the BEC, superfluid flow is deterministically established around each beam such that two vortices of opposite circulation are generated by the motion of the beams, with each vortex pinned to the in situ position of a laser beam. We study the vortex creation process, and show that the vortices can be moved about within the BEC by translating the positions of the laser beams. This technique can serve as a building block in future experimental techniques to create, on-demand, deterministic arrangements of few or many vortices within a BEC for precise studies of vortex dynamics and vortex interactions.
Kutkov, V; Buglova, E; McKenna, T
2011-06-01
Lessons learned from responses to past events have shown that more guidance is needed for the response to radiation emergencies (in this context, a 'radiation emergency' means the same as a 'nuclear or radiological emergency') which could lead to severe deterministic effects. The International Atomic Energy Agency (IAEA) requirements for preparedness and response for a radiation emergency, inter alia, require that arrangements shall be made to prevent, to a practicable extent, severe deterministic effects and to provide the appropriate specialised treatment for these effects. These requirements apply to all exposure pathways, both internal and external, and to all reasonable scenarios, including those resulting from malicious acts (e.g. dirty bombs). This paper briefly describes the approach used to develop the basis for emergency response criteria for protective actions to prevent severe deterministic effects in the case of external exposure and intake of radioactive material. PMID:21617296
Traffic-light boundary in the deterministic Nagel-Schreckenberg model
NASA Astrophysics Data System (ADS)
Jia, Ning; Ma, Shoufeng
2011-06-01
The characteristics of the deterministic Nagel-Schreckenberg model with traffic-light boundary conditions are investigated and elucidated in a mostly theoretical way. First, precise analytical results for the outflow are obtained for cases in which the duration of the red phase is longer than one step. Then, further results are derived and studied for cases in which the red phase equals one step. The main findings include the following. The maximum outflow is “road-length related” if the inflow is saturated; otherwise, if the inbound cars are generated stochastically, multiple theoretical outflow volumes may exist. The findings indicate that although the traffic-light boundary can be implemented in a simple and deterministic manner, the deterministic Nagel-Schreckenberg model with such a boundary has some unique and interesting behaviors.
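The deterministic Nagel-Schreckenberg model drops the random-braking step, leaving only acceleration and gap-limited deceleration. A toy open-road sketch in which a traffic light acts as the downstream boundary; the update rules are the standard deterministic NaSch ones, but the boundary treatment here is an assumption, not necessarily the paper's exact scheme:

```python
def step(road, vmax=5, red=False):
    """One parallel update of the deterministic Nagel-Schreckenberg model
    on an open road (-1 = empty cell, otherwise the car's speed).
    `red=True` blocks the downstream boundary like a stop line."""
    L = len(road)
    new = [-1] * L
    out = 0
    for i, v in enumerate(road):
        if v < 0:
            continue
        # gap to the next car, or to the stop line when the light is red
        gap = 0
        j = i + 1
        while j < L and road[j] < 0:
            gap += 1
            j += 1
        if j == L and not red:
            gap = L                    # free exit on green
        v = min(v + 1, vmax, gap)      # accelerate, then brake to the gap
        if i + v >= L:
            out += 1                   # car leaves the system
        else:
            new[i + v] = v
    return new, out

road = [0, -1, -1, 0, -1, -1, -1, 0, -1, -1]   # three stopped cars
flow = 0
for t in range(20):
    road, left = step(road, red=(t % 6 < 3))   # 3-step red, 3-step green
    flow += left
print("cars that left:", flow)
```

With a periodic 3-red/3-green light, all three cars eventually clear the road; varying the cycle lengths reproduces the kind of outflow behavior the abstract discusses.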
Upgrading Custom Simulink Library Components for Use in Newer Versions of Matlab
NASA Technical Reports Server (NTRS)
Stewart, Camiren L.
2014-01-01
The Spaceport Command and Control System (SCCS) at Kennedy Space Center (KSC) is a control system for monitoring and launching manned launch vehicles. Simulations of ground support equipment (GSE) and the launch vehicle systems are required throughout the life cycle of SCCS to test software, hardware, and procedures to train the launch team. The simulations of the GSE at the launch site, in conjunction with off-line processing locations, are developed using Simulink, a piece of Commercial Off-The-Shelf (COTS) software. The simulations that are built are then converted into code and run in a simulation engine called Trick, a Government Off-The-Shelf (GOTS) piece of software developed by NASA. In the world of hardware and software, it is not uncommon for products to be upgraded and patched or to eventually fade into obsolescence. In the case of SCCS simulation software, MathWorks has released a number of stable versions of Simulink since the deployment of the software on the Development Work Stations in the Linux environment (DWLs). The upgraded versions of Simulink have introduced a number of new tools and resources that, if utilized fully and correctly, will save time and resources during the overall development of the GSE simulations and their correlating documentation. Unfortunately, simply importing the already-built simulations into the new Matlab environment will not suffice, as it may produce results that differ from those obtained in the currently used version. Thus, an upgrade execution plan was developed and executed to fully upgrade the simulation environment to one of the latest versions of Matlab.
Deterministic LOCC transformation of three-qubit pure states and entanglement transfer
Tajima, Hiroyasu
2013-02-15
A necessary and sufficient condition for the possibility of a deterministic local operations and classical communication (LOCC) transformation of three-qubit pure states is given. The condition shows that the three-qubit pure states are a partially ordered set parametrized by five well-known entanglement parameters and a novel parameter; the five are the concurrences C_AB, C_AC, C_BC, the tangle τ_ABC and the fifth parameter J_5 of Acin et al. (2000) Ref. [19], while the new one is the entanglement charge Q_e. The order of the partially ordered set is defined by the possibility of a deterministic LOCC transformation from one state to another. In this sense, the present condition is an extension of Nielsen's work (Nielsen (1999) [14]) to three-qubit pure states. We also clarify the rules of transfer and dissipation of the entanglement caused by deterministic LOCC transformations. Moreover, the minimum number of measurements needed to reproduce an arbitrary deterministic LOCC transformation between three-qubit pure states is given. Highlights: • We obtained a necessary and sufficient condition for deterministic LOCC of 3 qubits. • We clarified rules of entanglement flow caused by measurements. • We found a new parameter which is interpreted as the 'charge of entanglement'. • We gave a set of entanglements which determines whether two states are LU-equivalent or not. • Our approach to deterministic LOCC of 3 qubits may be applicable to N qubits.
Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; Mosher, Scott W.; Peplow, Douglas E.; Wagner, John C.; Evans, Thomas M.; Grove, Robert E.
2015-06-30
The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class super computer.
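The memory saving from weight-window coarsening — storing the weight-window map on a coarser mesh than the deterministic solution — can be illustrated with a toy example. Taking the minimum lower bound within each block is one conservative merging choice, assumed here for illustration; it is not necessarily the algorithm of the paper:

```python
import numpy as np

def coarsen_ww(ww_fine, factor=2):
    """Coarsen a 2-D weight-window lower-bound map by `factor` in each
    dimension, keeping the minimum bound in each block (conservative:
    no particle is rouletted harder than on the fine mesh)."""
    nx, ny = ww_fine.shape
    assert nx % factor == 0 and ny % factor == 0
    blocks = ww_fine.reshape(nx // factor, factor, ny // factor, factor)
    return blocks.min(axis=(1, 3))

# a synthetic fine-mesh lower-bound map, decaying away from a source corner
fine = np.fromfunction(lambda i, j: 1.0 / (1.0 + i + j), (8, 8))
coarse = coarsen_ww(fine)
print(fine.nbytes, "->", coarse.nbytes, "bytes")  # → 512 -> 128 bytes
```

A factor-2 coarsening in each dimension cuts the map's memory footprint fourfold while keeping the bounds everywhere at or below the fine-mesh values.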
MatchGUI: A Graphical MATLAB-Based Tool for Automatic Image Co-Registration
NASA Technical Reports Server (NTRS)
Ansar, Adnan I.
2011-01-01
MatchGUI software, based on MATLAB, automatically matches two images and displays the match result by superimposing one image on the other. A slider bar allows focus to shift between the two images. There are tools for zoom, auto-crop to the overlap region, and basic image markup. Given a pair of ortho-rectified images (focused primarily on Mars orbital imagery for now), this software automatically co-registers the imagery so that corresponding image pixels are aligned. MatchGUI requires minimal user input, and performs a registration over scale and in-plane rotation fully automatically.
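The core of automatic co-registration for a translated image pair can be sketched with phase correlation. This minimal example recovers only an integer translation, whereas MatchGUI also handles scale and in-plane rotation; it is a generic technique, not MatchGUI's internal algorithm:

```python
import numpy as np

def phase_correlate(a, b):
    """Estimate the integer (row, col) shift of image `b` relative to `a`
    by phase correlation: whiten the cross-power spectrum and locate the
    resulting delta peak."""
    F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    F /= np.abs(F) + 1e-12              # keep only the phase
    peak = np.argmax(np.fft.ifft2(F).real)
    dy, dx = np.unravel_index(peak, a.shape)
    # wrap shifts into the signed range [-N/2, N/2)
    dy = dy - a.shape[0] if dy > a.shape[0] // 2 else dy
    dx = dx - a.shape[1] if dx > a.shape[1] // 2 else dx
    return dy, dx

rng = np.random.default_rng(1)
img = rng.random((64, 64))
shifted = np.roll(img, (5, -3), axis=(0, 1))   # known ground-truth shift
print(phase_correlate(img, shifted))
```

The estimated shift matches the applied one; in a real tool this integer estimate is typically refined to sub-pixel accuracy.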
Modeling hydraulic regenerative hybrid vehicles using AMESim and Matlab/Simulink
NASA Astrophysics Data System (ADS)
Lynn, Alfred; Smid, Edzko; Eshraghi, Moji; Caldwell, Niall; Woody, Dan
2005-05-01
This paper presents an overview of the simulation modeling of a hydraulic system with regenerative braking used to improve vehicle emissions and fuel economy. Two simulation software packages were used together to enhance the simulation capability for fuel economy results and for development of the vehicle and hybrid control strategy. AMESim, a hydraulic simulation software package, modeled the complex hydraulic circuit and component hardware and was interlinked with a Matlab/Simulink model of the vehicle, engine, and the control strategy required to operate the vehicle and the hydraulic hybrid system through various North American and European drive cycles.
Modelling and Simulation Based on Matlab/Simulink: A Press Mechanism
NASA Astrophysics Data System (ADS)
Halicioglu, R.; Dulger, L. C.; Bozdana, A. T.
2014-03-01
In this study, the design and kinematic analysis of a crank-slider mechanism for a crank press are presented. The crank-slider mechanism is commonly applied in practice, in both direct- and indirect-drive alternatives. Since inexpensiveness, flexibility and controllability are becoming more and more important in many industrial applications, especially in the automotive industry, a crank press with a servo actuator (servo crank press) is taken as the application. The design and kinematic analysis of the representative mechanism are presented, with a geometrical analysis of its inverse kinematics based on the desired motion of the slider. The mechanism is modelled on the MATLAB/Simulink platform. The simulation results are presented herein.
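The slider-crank position relation and its inverse (crank angle from a desired slider position) can be sketched as follows; the crank radius and rod length below are illustrative, not the paper's press dimensions:

```python
import numpy as np

def slider_pos(theta, r=0.1, l=0.4):
    """Slider position x(theta) for a crank of radius r and connecting-rod
    length l: x = r*cos(theta) + sqrt(l^2 - (r*sin(theta))^2)."""
    return r * np.cos(theta) + np.sqrt(l**2 - (r * np.sin(theta))**2)

def crank_angle(x, r=0.1, l=0.4):
    """Inverse kinematics on [0, pi]: the crank angle that produces slider
    position x (x must lie in [l - r, l + r]). Since x(theta) decreases
    monotonically on this interval, bisection suffices."""
    lo, hi = 0.0, np.pi
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if slider_pos(mid, r, l) > x:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

theta = crank_angle(0.45)
print(round(slider_pos(theta), 6))  # → 0.45
```

In a servo crank press this inverse mapping is what converts a desired slider motion profile into a crank (motor) angle trajectory.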
A novel method for simulation of brushless DC motor servo-control system based on MATLAB
NASA Astrophysics Data System (ADS)
Tao, Keyan; Yan, Yingmin
2006-11-01
This paper presents research on the simulation of a brushless DC motor (BLDCM) servo control system. Based on the mathematical model of the BLDCM, the system simulation model was built with the MATLAB software. In building the system model, isolated functional blocks, such as the BLDCM block, the rotor-position detection block, and the change-phase logic block, were modeled. By the organic combination of these blocks, the model of the BLDCM can be established easily. The simulation results testify to the reasonability and validity of the approach, and this novel method offers a new way of thinking for designing and debugging actual motors.
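The rotor-position detection and change-phase logic blocks can be sketched as a six-step commutation lookup: the electrical rotor angle selects one of six sectors, and each sector fixes which phases conduct. The sector boundaries and table below are a common textbook scheme, assumed here rather than taken from the paper:

```python
def hall_sector(theta_e):
    """Commutation sector (0-5) from the electrical rotor angle in degrees;
    a toy stand-in for the rotor-position detection block."""
    return int(theta_e % 360) // 60

# Six-step commutation table for phases (A, B, C):
# +1 = high side on, -1 = low side on, 0 = floating.
COMMUTATION = [
    (+1, -1, 0),
    (+1, 0, -1),
    (0, +1, -1),
    (-1, +1, 0),
    (-1, 0, +1),
    (0, -1, +1),
]

for theta in (0, 90, 200, 359):
    print(theta, "->", COMMUTATION[hall_sector(theta)])
```

In a full Simulink-style model this lookup would drive the inverter-bridge block, closing the loop between rotor position and phase energization.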
NASA Astrophysics Data System (ADS)
Shi, Ronghua; Liu, Shaorong; Wang, Shuo; Guo, Ying
2015-02-01
We present two deterministic entanglement purification protocols for χ-type entangled states, resorting to multiple degrees of freedom. One protocol uses spatial entanglement to distill the maximally entangled states from the mixed states, resorting to some linear optical elements. The other uses frequency entanglement for the purification. All the parties can jointly distill the maximally entangled states from the mixed states affected by environmental noise during transmission. Both protocols can work in a deterministic way with a success probability of 100%, in principle. These features may make the protocols useful in practical long-distance quantum communication.
Palmer, Tim N.; O’Shea, Michael
2015-01-01
How is the brain configured for creativity? What is the computational substrate for ‘eureka’ moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal (ultimately quantum decoherent) noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete. PMID:26528173
NASA Astrophysics Data System (ADS)
Park, Junbo; Ralph, D. C.; Buhrman, R. A.
2013-12-01
We model 100 ps pulse switching dynamics of orthogonal spin transfer (OST) devices that employ an out-of-plane polarizer and an in-plane polarizer. Simulation results indicate that increasing the spin polarization ratio, C_P = P_IPP/P_OPP, results in deterministic switching of the free layer without over-rotation (360° rotation). By using spin torque asymmetry to realize an enhanced effective P_IPP, we experimentally demonstrate this behavior in OST devices in parallel-to-antiparallel switching. Modeling predicts that decreasing the effective demagnetization field can substantially reduce the minimum C_P required to attain deterministic switching, while retaining a low critical switching current, I_p ~ 500 μA.
NASA Astrophysics Data System (ADS)
Schwartz, I.; Cogan, D.; Schmidgall, E. R.; Gantz, L.; Don, Y.; Zieliński, M.; Gershoni, D.
2015-11-01
We use one single, few-picosecond-long, variably polarized laser pulse to deterministically write any selected spin state of a quantum dot confined dark exciton whose life and coherence time are six and five orders of magnitude longer than the laser pulse duration, respectively. The pulse is tuned to an absorption resonance of an excited dark exciton state, which acquires nonnegligible oscillator strength due to residual mixing with bright exciton states. We obtain a high-fidelity one-to-one mapping from any point on the Poincaré sphere of the pulse polarization to a corresponding point on the Bloch sphere of the spin of the deterministically photogenerated dark exciton.
Hybrid method of deterministic and probabilistic approaches for multigroup neutron transport problem
Lee, D.
2012-07-01
A hybrid method combining deterministic and probabilistic approaches is proposed to solve the Boltzmann transport equation. The new method uses a deterministic method, the Method of Characteristics (MOC), for the fast and thermal neutron energy ranges and a probabilistic method, Monte Carlo (MC), for the intermediate resonance energy range. In the case of a continuous-energy problem, the hybrid method will be able to take advantage of the fast MOC calculation and the accurate resonance self-shielding treatment of the MC method. As a proof of principle, this paper presents the hybrid methodology applied to a multigroup form of the Boltzmann transport equation and confirms that the hybrid method produces results consistent with the MC and MOC methods. (authors)