Science.gov

Sample records for deterministic matlab program

  1. Nonlinear Boltzmann equation for the homogeneous isotropic case: Minimal deterministic Matlab program

    NASA Astrophysics Data System (ADS)

    Asinari, Pietro

    2010-10-01

    Distribution format: tar.gz. Programming language: Tested with Matlab version ⩽6.5; however, in principle, any recent version of Matlab or Octave should work. Computer: All supporting Matlab or Octave. Operating system: All supporting Matlab or Octave. RAM: 300 MBytes. Classification: 23. Nature of problem: The problem consists in integrating the homogeneous Boltzmann equation for a generic collisional kernel in the case of isotropic symmetry, by a deterministic direct method. Difficulties arise from the multi-dimensionality of the collisional operator and from satisfying the conservation of particle number and energy (momentum is trivial for this test case) as accurately as possible, in order to preserve the late dynamics. Solution method: The solution is based on the method proposed by Aristov (2001) [1], but with two substantial improvements: (a) the original problem is reformulated in terms of particle kinetic energy (this allows one to ensure exact particle number and energy conservation during microscopic collisions) and (b) a DVM-like correction (where DVM stands for Discrete Velocity Model) is adopted for improving the relaxation rates (this allows one to satisfy the conservation laws exactly at the macroscopic level, which is particularly important for describing the late dynamics in the relaxation towards equilibrium). Both corrections make it possible to derive very accurate reference solutions for this test case. Restrictions: The nonlinear Boltzmann equation is extremely challenging from the computational point of view, in particular for deterministic methods, despite the increased computational power of recent hardware. In this work, only the homogeneous isotropic case is considered, to make possible the development of a minimal program (in a simple scripting language) and to allow the user to check the advantages of the proposed improvements beyond Aristov's (2001) method [1]. The initial conditions are assumed to be parameterized according to a fixed analytical expression, but this can be…
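
    A minimal sketch of the kind of deterministic relaxation this record describes, in MATLAB/Octave. It is not Asinari's scheme: it relaxes an isotropic distribution f(E) on a kinetic-energy grid with a simple BGK step and checks that particle number and energy stay (approximately) conserved; all grids and rates are illustrative.

      % BGK-style relaxation on a kinetic-energy grid (illustrative only)
      E  = linspace(0, 20, 200)';        % kinetic-energy grid
      dE = E(2) - E(1);
      w  = sqrt(E);                      % isotropic density of states ~ sqrt(E)
      f  = exp(-(sqrt(E) - 2).^2);       % arbitrary non-equilibrium start
      n0 = sum(w .* f) * dE;             % particle number (should be conserved)
      e0 = sum(w .* E .* f) * dE;        % total energy (should be conserved)
      tau = 1.0;  dt = 0.05;
      for step = 1:400
          T   = 2 * e0 / (3 * n0);                  % temperature from e0/n0 = 3T/2
          feq = exp(-E / T);
          feq = feq * (n0 / (sum(w .* feq) * dE));  % renormalize particle number
          f   = f + dt / tau * (feq - f);           % BGK relaxation step
      end
      fprintf('number drift: %.2e, energy drift: %.2e\n', ...
              sum(w.*f)*dE - n0, sum(w.*E.*f)*dE - e0);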

  2. MatLab Script and Functional Programming

    NASA Technical Reports Server (NTRS)

    Shaykhian, Gholam Ali

    2007-01-01

    MatLab Script and Functional Programming: MatLab is one of the most widely used very high-level programming languages for scientific and engineering computations. It is very user-friendly and needs practically no formal programming knowledge. Presented here are MatLab programming aspects, not just MatLab commands, for scientists and engineers who have no formal programming training and no significant time to spare for learning programming to solve their real-world problems. Specifically provided are programs for visualization. The MatLab seminar covers the functional and script programming aspects of the MatLab language. Specific expectations are: a) Recognize MatLab commands, scripts and functions. b) Create and run a MatLab function. c) Read, recognize, and describe MatLab syntax. d) Recognize decisions, loops and matrix operators. e) Evaluate scope among multiple files, and multiple functions within a file. f) Declare, define and use scalar variables, vectors and matrices.
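
    As a minimal illustration of the script/function distinction the seminar covers (the names here are ours, not from the seminar), save the following as plotsine.m:

      function plotsine(freq)
      % PLOTSINE  Plot one second of a sine wave at frequency freq (Hz).
          if nargin < 1, freq = 1; end        % default input argument
          t = 0:0.001:1;                      % local variable (function scope)
          y = sin(2 * pi * freq * t);
          plot(t, y), xlabel('t (s)'), ylabel('amplitude')
      end
      % A script, by contrast, is a plain list of commands with no input
      % arguments; its variables live in the base workspace, e.g.:
      %   t = 0:0.001:1; y = sin(2*pi*t); plot(t, y)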

  3. Research of Hybrid Programming with C#.net and Matlab

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; An, Jian-Ping; Chen, Pan

    Several approaches to integrated programming with C# and Matlab are introduced in this paper, including using the Matlab Engine, calling the Matlab workspace from C# functions, and using a COM component. The paper also shows how to implement these approaches in code, then analyzes the characteristics of each method and gives its range of use.

  4. Peer Learning in a MATLAB Programming Course

    NASA Astrophysics Data System (ADS)

    Reckinger, Shanon

    2016-11-01

    Three forms of research-based peer learning were implemented in the design of a MATLAB programming course for mechanical engineering undergraduate students. First, a peer learning program was initiated. These undergraduate peer learning leaders played two roles in the course: (I) they were in the classroom helping students with their work, and (II) they led optional two-hour help sessions outside of class time. The second form of peer learning was implemented through the inclusion of a peer discussion period following in-class clicker quizzes. The third form of peer learning had the students creating video project assignments and posting them on YouTube to explain course topics to their peers. Several other more informal techniques were used to encourage peer learning. Student feedback in the form of both instructor-designed survey responses and formal course evaluations (quantitative and narrative) will be presented. Finally, effectiveness will be measured by formal assessment, both direct and indirect, of these peer learning methods. This will include both academic data/grades and pre/post test scores. Overall, the course design and its inclusion of these peer learning techniques demonstrate effectiveness.

  5. MatLab Programming for Engineers Having No Formal Programming Knowledge

    NASA Technical Reports Server (NTRS)

    Shaykhian, Linda H.; Shaykhian, Gholam Ali

    2007-01-01

    MatLab is one of the most widely used very high-level programming languages for scientific and engineering computations. It is very user-friendly and needs practically no formal programming knowledge. Presented here are MatLab programming aspects, not just MatLab commands, for scientists and engineers who have no formal programming training and no significant time to spare for learning programming to solve their real-world problems. Specifically provided are programs for visualization. Also stated are the current limitations of MatLab, which could be addressed by MathWorks Inc. in a future version to make MatLab more versatile.

  6. QUBIT4MATLAB V3.0: A program package for quantum information science and quantum optics for MATLAB

    NASA Astrophysics Data System (ADS)

    Tóth, Géza

    2008-09-01

    A program package for MATLAB is introduced that helps calculations in quantum information science and quantum optics. It has commands for the following operations: (i) Reordering the qudits of a quantum register, computing the reduced state of a quantum register. (ii) Defining important quantum states easily. (iii) Formatted input and output for quantum states and operators. (iv) Constructing operators acting on given qudits of a quantum register and constructing spin chain Hamiltonians. (v) Partial transposition, matrix realignment and other operations related to the detection of quantum entanglement. (vi) Generating random state vectors, random density matrices and random unitaries. Program summary: Program title: QUBIT4MATLAB V3.0. Catalogue identifier: AEAZ_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAZ_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 5683. No. of bytes in distributed program, including test data, etc.: 37 061. Distribution format: tar.gz. Programming language: MATLAB 6.5; runs also on Octave. Computer: Any which supports MATLAB 6.5. Operating system: Any which supports MATLAB 6.5; e.g., Microsoft Windows XP, Linux. Classification: 4.15. Nature of problem: Subroutines helping calculations in quantum information science and quantum optics. Solution method: A program package, that is, a set of commands, is provided for MATLAB. One can use these commands interactively or within a program. Running time: 10 seconds to 1 minute.
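
    A plain-MATLAB sketch of two of the operations listed above, a random state vector and a partial transposition, written directly against 4x4 matrices rather than through the package's own commands:

      psi = randn(4,1) + 1i*randn(4,1);  psi = psi / norm(psi);  % random two-qubit state
      rho = psi * psi';                        % its density matrix
      rhoPT = zeros(4);                        % partial transpose on qubit 2
      for a = 0:1, for b = 0:1, for c = 0:1, for d = 0:1
          rhoPT(2*a+d+1, 2*c+b+1) = rho(2*a+b+1, 2*c+d+1);
      end, end, end, end
      % Negative eigenvalues of the partial transpose signal entanglement
      % (Peres-Horodecki criterion); a random pure two-qubit state is
      % almost surely entangled.
      disp(eig(rhoPT))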

  7. MatLab program for precision calibration of optical tweezers

    NASA Astrophysics Data System (ADS)

    Tolić-Nørrelykke, Iva Marija; Berg-Sørensen, Kirstine; Flyvbjerg, Henrik

    2004-06-01

    Optical tweezers are used as force transducers in many types of experiments. The force they exert in a given experiment is known only after a calibration. Computer codes that calibrate optical tweezers with high precision and reliability in the (x, y)-plane orthogonal to the laser beam axis were written in MatLab (MathWorks Inc.) and are presented here. The calibration is based on the power spectrum of the Brownian motion of a dielectric bead trapped in the tweezers. Precision is achieved by accounting for a number of factors that affect this power spectrum. First, cross-talk between channels in 2D position measurements is tested for, and eliminated if detected. Then, the Lorentzian power spectrum that results from the Einstein-Ornstein-Uhlenbeck theory is fitted to the low-frequency part of the experimental spectrum in order to obtain an initial guess for the parameters to be fitted. Finally, a more complete theory is fitted, one that optionally accounts for the frequency dependence of the hydrodynamic drag force and hydrodynamic interaction with a nearby cover slip, for effects of finite sampling frequency (aliasing), for effects of anti-aliasing filters in the data acquisition electronics, and for unintended "virtual" filtering caused by the position detection system. Each of these effects can be left out or included as the user prefers, with user-defined parameters. Several tests are applied to the experimental data during calibration to ensure that the data comply with the theory used for their interpretation: independence of x- and y-coordinates, Hooke's law, exponential distribution of power spectral values, uncorrelated Gaussian scatter of residual values. Results are given with statistical errors and covariance matrix. Program summary: Title of program: tweezercalib. Catalogue identifier: ADTV. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Program Summary URL: http://cpc.cs.qub.ac.uk/summaries/ADTV. Computer for…
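
    The core of the calibration idea in a few lines (a sketch, not the tweezercalib code, and using a plain least-squares fit rather than the paper's statistically careful treatment); fc and D are illustrative:

      fs = 65536; N = 2^18; dt = 1/fs;
      fc = 500; D = 0.1;                         % corner frequency (Hz), diffusion coeff.
      x = filter(sqrt(2*D*dt), [1, -(1 - 2*pi*fc*dt)], randn(N,1));  % simulated bead position
      P = (abs(fft(x)).^2) * dt / N;             % periodogram (two-sided PSD estimate)
      f = (0:N-1)' * fs / N;
      use = f > 50 & f < 10000;                  % fit band
      lor = @(p, f) p(2) ./ (2*pi^2 * (p(1)^2 + f.^2));   % Lorentzian D/(2*pi^2*(fc^2 + f^2))
      pfit = fminsearch(@(p) sum((P(use) - lor(p, f(use))).^2), [300, 0.05]);
      fprintf('fitted fc = %.0f Hz, D = %.3f\n', pfit(1), pfit(2));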

  8. MPBEC, a Matlab Program for Biomolecular Electrostatic Calculations.

    PubMed

    Vergara-Perez, Sandra; Marucho, Marcelo

    2016-01-01

    One of the most used and efficient approaches to compute electrostatic properties of biological systems is to numerically solve the Poisson-Boltzmann (PB) equation. There are several software packages available that solve the PB equation for molecules in aqueous electrolyte solutions. Most of these software packages are useful for scientists with specialized training and expertise in computational biophysics. However, the user is usually required to manually make several important choices, depending on the complexity of the biological system, to successfully obtain the numerical solution of the PB equation. This may become an obstacle for researchers, experimentalists, and even students with no special training in computational methodologies. Aiming to overcome this limitation, in this article we present MPBEC, a free, cross-platform, open-source software that provides non-experts in the field an easy and efficient way to perform biomolecular electrostatic calculations on single-processor computers. MPBEC is a Matlab script based on the Adaptive Poisson-Boltzmann Solver, one of the most popular approaches used to solve the PB equation. MPBEC does not require any user programming, text editing or extensive statistical skills, and comes with detailed user-guide documentation. As a unique feature, MPBEC includes a useful graphical user interface (GUI) application which helps and guides users to configure and set up the optimal parameters and approximations to successfully perform the required biomolecular electrostatic calculations. The GUI also incorporates visualization tools to facilitate users' pre- and post-analysis of structural and electrical properties of biomolecules.

  9. MPBEC, a Matlab Program for Biomolecular Electrostatic Calculations

    NASA Astrophysics Data System (ADS)

    Vergara-Perez, Sandra; Marucho, Marcelo

    2016-01-01

    One of the most used and efficient approaches to compute electrostatic properties of biological systems is to numerically solve the Poisson-Boltzmann (PB) equation. There are several software packages available that solve the PB equation for molecules in aqueous electrolyte solutions. Most of these software packages are useful for scientists with specialized training and expertise in computational biophysics. However, the user is usually required to manually make several important choices, depending on the complexity of the biological system, to successfully obtain the numerical solution of the PB equation. This may become an obstacle for researchers, experimentalists, and even students with no special training in computational methodologies. Aiming to overcome this limitation, in this article we present MPBEC, a free, cross-platform, open-source software that provides non-experts in the field an easy and efficient way to perform biomolecular electrostatic calculations on single-processor computers. MPBEC is a Matlab script based on the Adaptive Poisson-Boltzmann Solver, one of the most popular approaches used to solve the PB equation. MPBEC does not require any user programming, text editing or extensive statistical skills, and comes with detailed user-guide documentation. As a unique feature, MPBEC includes a useful graphical user interface (GUI) application which helps and guides users to configure and set up the optimal parameters and approximations to successfully perform the required biomolecular electrostatic calculations. The GUI also incorporates visualization tools to facilitate users' pre- and post-analysis of structural and electrical properties of biomolecules.

  10. The Relationship between Gender and Students' Attitude and Experience of Using a Mathematical Software Program (MATLAB)

    ERIC Educational Resources Information Center

    Ocak, Mehmet A.

    2006-01-01

    This correlation study examined the relationship between gender and the students' attitude and prior knowledge of using one of the mathematical software programs (MATLAB). Participants were selected from one community college, one state university and one private college. Students were volunteers from three Calculus I classrooms (one class from…

  11. Enhancing Student Writing and Computer Programming with LATEX and MATLAB in Multivariable Calculus

    ERIC Educational Resources Information Center

    Sullivan, Eric; Melvin, Timothy

    2016-01-01

    Written communication and computer programming are foundational components of an undergraduate degree in the mathematical sciences. All lower-division mathematics courses at our institution are paired with computer-based writing, coding, and problem-solving activities. In multivariable calculus we utilize MATLAB and LATEX to have students explore…

  12. [Application of the mixed programming with Labview and Matlab in biomedical signal analysis].

    PubMed

    Yu, Lu; Zhang, Yongde; Sha, Xianzheng

    2011-01-01

    This paper introduces the method of mixed programming with LabVIEW and Matlab, and applies this method in a pulse wave pre-processing and feature-detection system. The method has proved suitable, efficient and accurate, providing a new kind of approach for biomedical signal analysis.

  13. TC-Investigator: A Matlab Program to Explore Pseudosections

    NASA Astrophysics Data System (ADS)

    Pearce, Mark; Gazley, Michael; White, Alistair

    2014-05-01

    Forward modelling of bulk rock compositions to constrain pressures and temperatures of metamorphism based on mineral assemblage is a commonly used technique. The pseudosections produced contain a wealth of information about predicted mineral compositions and abundances that goes far beyond variations in mineral assemblage. A grid of these variations can be contoured using Gibbs free energy minimisation software (such as Theriak-Domino), or precise isopleths can be calculated for specific quantities in THERMOCALC. We have produced a new piece of software called TC-Investigator that amalgamates these approaches to provide a relatively quick and user-friendly way to contour all compositional parameters and mineral modes across a THERMOCALC pseudosection. TC-Investigator takes the postscript pseudosection diagram and creates a grid of points at a user-specified resolution. THERMOCALC is then used to calculate the equilibrium mineral assemblage at each point using an initial starting guess provided by the user (this can be calculated during initial pseudosection calculation). Once all points have been tried, any that failed to calculate are re-tried using interpolated starting-guess values from the surrounding points. This procedure is iterated until no more solutions are found. Any remaining unsolved points are then interpolated numerically from surrounding solutions to produce a fully quantified set of mineral modes and compositions. Following calculation, the dataset can be contoured and output as figures, written out as a Matlab-readable binary structure, or selected compositions can be written to an ASCII text file. Compositional maps created by TC-Investigator have the power to inform the user about compositional variables that are not conventionally considered. The automated calculation method makes it easy to investigate all variables in one go. For example, in metapelitic rocks, garnet shows the variations in composition that are usually contoured; however, these couple to…
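
    The interpolation-based gap filling described above looks roughly like this in MATLAB (an illustrative sketch, not TC-Investigator's code): grid points where the calculation failed (NaN) are filled from surrounding solved points.

      [P, T] = meshgrid(linspace(2, 12, 40), linspace(450, 700, 40));
      Z = exp(-((P - 7).^2/8 + (T - 575).^2/5000));   % stand-in for a mineral mode
      Z(rand(size(Z)) < 0.15) = NaN;                  % 15% "failed" grid points
      ok = ~isnan(Z);
      Zf = Z;
      Zf(~ok) = griddata(P(ok), T(ok), Z(ok), P(~ok), T(~ok), 'linear');
      still = isnan(Zf);                              % points outside the convex hull
      Zf(still) = griddata(P(ok), T(ok), Z(ok), P(still), T(still), 'nearest');
      contourf(P, T, Zf), xlabel('P'), ylabel('T')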

  14. MT2DInvMatlab—A program in MATLAB and FORTRAN for two-dimensional magnetotelluric inversion

    NASA Astrophysics Data System (ADS)

    Lee, Seong Kon; Kim, Hee Joon; Song, Yoonho; Lee, Choon-Ki

    2009-08-01

    MT2DInvMatlab is an open-source MATLAB® software package for two-dimensional (2D) inversion of magnetotelluric (MT) data; it is written in mixed languages of MATLAB and FORTRAN. MT2DInvMatlab uses the finite element method (FEM) to compute 2D MT model responses, and smoothness-constrained least-squares inversion with a spatially variable regularization parameter algorithm to stabilize the inversion process and provide a high-resolution optimal earth model. It is also able to include terrain effects in inversion by incorporating topography into a forward model. This program runs under the MATLAB environment so that users can utilize the existing general interface of MATLAB, while some specific functions are written in FORTRAN 90 to speed up computation and reuse pre-existing FORTRAN code in the MATLAB environment with minimal modification. This program has been tested using synthetic models, including one with variable topography, and on field data. The results were assessed by comparing inverse models obtained with MT2DInvMatlab and with a non-linear conjugate gradient (NLCG) algorithm. In both tests the new inversion software reconstructs the subsurface resistivity structure very closely and provides an improvement in both resolution and stability.
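
    The essence of smoothness-constrained least-squares inversion, as a generic sketch (not MT2DInvMatlab itself): given a Jacobian J, data d and a first-difference roughening operator L, solve for the model m with regularization parameter lambda.

      n = 50;                                  % number of model parameters
      J = randn(80, n);                        % stand-in Jacobian (80 data)
      mtrue = sin(linspace(0, 3*pi, n))';      % smooth "true" model
      d = J * mtrue + 0.05 * randn(80, 1);     % noisy data
      L = diff(eye(n));                        % roughening (first differences)
      lambda = 1.0;                            % regularization parameter
      m = (J'*J + lambda^2 * (L'*L)) \ (J' * d);   % regularized normal equations
      fprintf('rms misfit: %.3f\n', norm(J*m - d) / sqrt(numel(d)));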

  15. Design of a program in Matlab environment for gamma spectrum analysis of geological samples

    NASA Astrophysics Data System (ADS)

    Rojas, M.; Correa, R.

    2016-05-01

    In this work we present the analysis of gamma-ray spectra of ammonite fossils found in different places. One of the fossils was found near the city of Cusco (Perú) and the other in "Cajón del Maipo" near Santiago (Chile). Spectra were taken with a liquid-nitrogen-cooled hyperpure germanium (HPGe) detector, using the technique of high-resolution gamma spectroscopy. A program for automatic detection and classification of the samples was developed in Matlab. The program has the advantage that it can be adapted directly, generalized further, or automated for specific spectra, and it can compare spectra with one another. For example, it can calibrate a spectrum automatically, given only the calibration spectrum, without the need to enter calibration points by hand. Finally, it also removes external background noise.
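
    Automatic calibration of the kind described can be sketched as follows (illustrative, not the authors' program; findpeaks requires the Signal Processing Toolbox or the Octave signal package):

      ch = (1:4096)';                                   % channel axis
      spec = 50 * exp(-ch/1500);                        % smooth background
      for c = [662, 1173, 1332] / 0.5                   % lines at 0.5 keV/channel
          spec = spec + 400 * exp(-((ch - c)/6).^2);    % synthetic photopeaks
      end
      [~, locs] = findpeaks(spec, 'MinPeakProminence', 100);
      Eknown = [662; 1173; 1332];                       % keV (Cs-137, Co-60)
      p = polyfit(locs(:), Eknown, 1);                  % E = p(1)*channel + p(2)
      fprintf('gain %.4f keV/channel, offset %.2f keV\n', p(1), p(2));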

  16. How to get students to love (or not hate) MATLAB and programming

    NASA Astrophysics Data System (ADS)

    Reckinger, Shanon; Reckinger, Scott

    2014-11-01

    An effective programming course geared toward engineering students requires the utilization of modern teaching philosophies. A newly designed course that focuses on programming in MATLAB involves flipping the classroom and integrating various active teaching techniques. Vital aspects of the new course design include: lengthening in-class contact hours, Process-Oriented Guided Inquiry Learning (POGIL) method worksheets (self-guided instruction), student-created video content posted on YouTube, clicker questions (used in class to practice reading and debugging code), programming exams that don't require computers, integrating oral exams into the classroom, fostering an environment for formal and informal peer learning, and designing in a broader theme to tie together assignments. However, possibly the most important piece to this programming course puzzle: the instructor needs to be able to find programming mistakes very fast and then lead individuals and groups through the steps to find their mistakes themselves. The effectiveness of the new course design is demonstrated through pre- and post-concept exam results and student evaluation feedback. Students reported that the course was challenging and required a lot of effort, but left largely positive feedback.

  17. Aerial image simulation for partial coherent system with programming development in MATLAB

    NASA Astrophysics Data System (ADS)

    Hasan, Md. Nazmul; Rahman, Md. Momtazur; Udoy, Ariful Banna

    2014-10-01

    The aerial image in a partially coherent system can be calculated by either Abbe's method or the sum of coherent systems decomposition (SOCS) method. This paper presents a Matlab implementation that recasts the analytical representation of Abbe's method in matrix form, which has advantages for both Abbe's method and SOCS, since matrix calculation is easier than double integration over the object plane or pupil plane. First, a matrix P is derived from the pupil function and the effective light source in the spatial frequency domain. By applying singular value decomposition (SVD) to the matrix P, eigenvalues and eigenfunctions are obtained. The aerial image can then be computed from the eigenvalues and eigenfunctions without calculating the transmission cross coefficient (TCC). The final aerial image is almost identical to the original cross mask, and the intensity distribution on the image plane is almost uniform across the linewidth of the mask.
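
    A one-dimensional toy version of the SOCS computation described above (illustrative only; the matrix P here is a smooth stand-in, not a real transmission cross coefficient): decompose P, then sum eigenvalue-weighted coherent images.

      n = 256;
      mask = double(abs((1:n)' - n/2) < 20);       % 1-D cross-section of a mask
      x = ((1:n)' - n/2) / n;
      P = exp(-(x - x').^2 / 0.002);               % stand-in symmetric operator matrix
      [U, S, ~] = svd(P);  s = diag(S);            % SVD: eigenfunctions and eigenvalues
      I = zeros(n, 1);
      for j = 1:5                                  % truncate to 5 coherent systems
          coh = conv(mask, U(:, j), 'same');       % coherent image amplitude
          I = I + s(j) * abs(coh).^2;              % eigenvalue-weighted intensity
      end
      plot(I / max(I)), ylabel('normalized intensity')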

  18. 2-D Modeling of Energy-z Beam Dynamics Using the LiTrack Matlab Program

    SciTech Connect

    Cauley, S.K.; Woods, M.; /SLAC

    2005-12-15

    Short bunches and the bunch length distribution have important consequences for both the LCLS project at SLAC and the proposed ILC project. For both projects, it is important to simulate what bunch length distributions are expected and then to perform actual measurements. The goal of the research is to determine the sensitivity of the bunch length distribution to accelerator phase and voltage. This then indicates the level of control and stability that is needed. In this project I simulated beamlines to find the rms bunch length in three different beam lines at SLAC: the test beam to End Station A (ILC-ESA) for the ILC studies, the Linac Coherent Light Source (LCLS), and LCLS-ESA. To simulate the beamlines, I used the LiTrack program, which performs 2-dimensional tracking of an electron bunch's longitudinal (z) and energy spread (E) parameters. To reduce processing time, I developed a small program to loop over adjustable machine parameters. LiTrack is a Matlab script, and Matlab is also used for plotting and for saving and loading files. The results show that the LCLS in Linac-A is the most sensitive when looking at the ratio of change in phase to rate of change. The results also show a noticeable difference between the LCLS and LCLS-ESA, which suggests that further testing should look into the Beam Switch Yard and End Station A to determine why the results for the LCLS and LCLS-ESA vary.
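
    The parameter-scan wrapper mentioned above ("a small program to loop over adjustable machine parameters") reduces to a loop of this shape; run_litrack below is a toy stand-in for a call into LiTrack, and all numbers are illustrative:

      run_litrack = @(phase) 0.5 + 0.02 * phase.^2;   % toy stand-in response (mm)
      phases = -5:0.5:5;                              % accelerator phase offsets (deg)
      sigz = arrayfun(run_litrack, phases);           % rms bunch length per setting
      plot(phases, sigz), xlabel('phase offset (deg)'), ylabel('\sigma_z (mm)')
      dsig = gradient(sigz, phases);                  % sensitivity d(sigma_z)/d(phase)
      fprintf('max sensitivity: %.3f mm/deg\n', max(abs(dsig)));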

  19. A MATLAB program to calculate translational and rotational diffusion coefficients of a single particle

    NASA Astrophysics Data System (ADS)

    Charsooghi, Mohammad A.; Akhlaghi, Ehsan A.; Tavaddod, Sharareh; Khalesifard, H. R.

    2011-02-01

    We developed a graphical user interface, MATLAB-based program to calculate the translational diffusion coefficients in three dimensions for a single diffusing particle suspended inside a fluid. When the particles are not spherical, a rotational degree of freedom is considered in addition to their translational motion, and a planar rotational diffusion coefficient can be calculated in addition to the translational diffusion coefficients. Time averaging and ensemble averaging over the particle displacements are taken to calculate the mean-square displacement variation in time, and hence the diffusion coefficients. To monitor the random motion of non-spherical particles, a reference frame is used in which the particle has only translational motion. We call this the body frame; it rotates with the particle about the z-axis of the lab frame. Some statistical analyses, such as the velocity autocorrelation function and histograms of displacements for the particle in either the lab or body frame, are available in the program. The program also calculates theoretical values of the diffusion coefficients for particles of some basic geometrical shapes (sphere, spheroid and cylinder), where other diffusion parameters like temperature and fluid viscosity coefficient can be adjusted. Program summary: Program title: KOJA. Catalogue identifier: AEHK_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHK_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 48 021. No. of bytes in distributed program, including test data, etc.: 1 310 320. Distribution format: tar.gz. Programming language: MatLab (MathWorks Inc.) version 7.6 or higher. Statistics Toolbox and Curve Fitting Toolbox required. Computer: Tested on Windows and Linux, but generally it would work on any…
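
    The core mean-square displacement computation is compact in MATLAB (a sketch of the idea, not KOJA itself): time-average the squared displacements at each lag, then get D from MSD = 6Dt in three dimensions.

      dt = 0.01; N = 5000; D = 0.5;
      r = cumsum(sqrt(2*D*dt) * randn(N, 3));          % simulated 3-D Brownian track
      maxlag = 100;
      msd = zeros(maxlag, 1);
      for lag = 1:maxlag                               % time average at each lag
          dr = r(1+lag:end, :) - r(1:end-lag, :);
          msd(lag) = mean(sum(dr.^2, 2));
      end
      t = (1:maxlag)' * dt;
      Dest = (t \ msd) / 6;                            % slope of MSD vs t, over 6
      fprintf('estimated D = %.3f (true %.3f)\n', Dest, D);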

  20. Solving deterministic non-linear programming problem using Hopfield artificial neural network and genetic programming techniques

    NASA Astrophysics Data System (ADS)

    Vasant, P.; Ganesan, T.; Elamvazuthi, I.

    2012-11-01

    A fairly reasonable result was obtained for non-linear engineering problems using optimization techniques such as neural networks, genetic algorithms, and fuzzy logic independently in the past. Increasingly, hybrid techniques are being used to solve non-linear problems to obtain better output. This paper discusses the use of a neuro-genetic hybrid technique to optimize geological structure mapping, known as seismic survey. It involves the minimization of an objective function subject to geophysical and operational constraints. In this work, the optimization was initially performed using genetic programming, followed by a hybrid neuro-genetic programming approach. Comparative studies and analysis were then carried out on the optimized results. The results indicate that the hybrid neuro-genetic technique produced better results than the stand-alone genetic programming method.

  1. Development of a low cost infrared spectrophotometer and a Matlab program to detect terrestrial and extraterrestrial water vapor

    NASA Astrophysics Data System (ADS)

    Raju, Lakshmi

    2014-03-01

    The objective of this project was to develop a low-cost infrared spectrophotometer to measure terrestrial or extraterrestrial water vapor and to create a Matlab program to analyze the absorption data. Narrow-bandwidth infrared filters of 940 nm and 1000 nm were used to differentially detect absorption due to the vibrational frequency of water vapor. Light travelling through a collimating tube with varying humidity was allowed to pass through the respective filters. The intensity of exiting light was measured using a silicon photodiode connected to a multimeter and a laptop running a Matlab program. Absorption measured (decrease in voltage) using the 940 nm filter was significantly higher with increasing humidity (p < 0.05), demonstrating that the instrument can detect and relatively quantify water vapor. A Matlab program was written to comparatively graph the absorption data. In conclusion, a novel, low-cost infrared spectrophotometer was successfully created to detect water vapor and serves as a prototype to detect water on the moon. This instrument can also assist in teaching and learning spectrophotometry.

  2. MineSeis -- A MATLAB GUI program to calculate synthetic seismograms from a linear, multi-shot blast source model

    SciTech Connect

    Yang, X.

    1998-12-31

    Modeling ground motions from multi-shot, delay-fired mining blasts is important to the understanding of their source characteristics, such as spectrum modulation. MineSeis is a Graphical User Interface (GUI) program in MATLAB® (a computer language) developed for the effective modeling of these multi-shot mining explosions. The program provides a convenient and interactive tool for modeling studies. Multi-shot, delay-fired mining blasts are modeled as the time-delayed linear superposition of identical single-shot sources in the program. These single shots are in turn modeled as the combination of an isotropic explosion source and a spall source. Mueller and Murphy's (1971) model for underground nuclear explosions is used as the explosion source model. A modification of Anandakrishnan et al.'s (1997) spall model is developed as the spall source model. Delays both due to the delay-firing and due to the single-shot location differences are taken into account in calculating the time delays of the superposition. Both synthetic and observed single-shot seismograms can be used to construct the superpositions. The program uses a MATLAB GUI for input and output to facilitate user interaction. With user-provided source and path parameters, the program calculates and displays the source time functions, the single-shot synthetic seismograms and the superimposed synthetic seismograms. In addition, the program provides tools so that the user can manipulate the results, such as filtering, zooming and creating hard copies.
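
    The delay-fired superposition itself is only a few lines (an illustrative sketch, not MineSeis): shift one single-shot waveform by each shot's firing delay plus its location-dependent delay and sum.

      fs = 200; t = (0:1/fs:5)';
      w1 = exp(-40*(t - 0.2).^2) .* sin(2*pi*8*t);      % stand-in single-shot record
      nshot = 8;
      fire  = (0:nshot-1)' * 0.025;                     % 25 ms delay-firing interval
      dtrav = (0:nshot-1)' * 0.002;                     % location-dependent delays
      rec = zeros(size(t));
      for k = 1:nshot
          shift = round((fire(k) + dtrav(k)) * fs);     % total delay in samples
          rec = rec + [zeros(shift, 1); w1(1:end-shift)];
      end
      plot(t, rec), xlabel('time (s)'), ylabel('superposed synthetic')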

  3. MATLAB-Based Program for Teaching Autocorrelation Function and Noise Concepts

    ERIC Educational Resources Information Center

    Jovanovic Dolecek, G.

    2012-01-01

    An attractive MATLAB-based tool for teaching the basics of autocorrelation function and noise concepts is presented in this paper. This tool enhances traditional in-classroom lecturing. The demonstrations of the tool described here highlight the description of the autocorrelation function (ACF) in a general case for wide-sense stationary (WSS)…
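
    In the same spirit, a few lines suffice to demonstrate the ACF of white versus correlated noise (our own minimal demo, not the published tool):

      N = 4096;
      w = randn(N, 1);                          % white noise
      s = filter(1, [1, -0.9], w);              % AR(1) process: correlated noise
      acf = @(x, L) arrayfun(@(k) sum(x(1+k:end) .* x(1:end-k)) / sum(x.^2), 0:L);
      lags = 0:40;
      plot(lags, acf(w, 40), 'o-', lags, acf(s, 40), 's-')
      legend('white noise', 'AR(1), a = 0.9'), xlabel('lag'), ylabel('ACF')
      % For a WSS process the ACF depends only on the lag: white noise gives a
      % single spike at lag 0, while the AR(1) ACF decays roughly as 0.9^k.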

  4. FeynDyn: A MATLAB program for fast numerical Feynman integral calculations for open quantum system dynamics on GPUs

    NASA Astrophysics Data System (ADS)

    Dattani, Nikesh S.

    2013-12-01

    This MATLAB program calculates the dynamics of the reduced density matrix of an open quantum system modeled either by the Feynman-Vernon model or the Caldeira-Leggett model. The user gives the program a Hamiltonian matrix that describes the open quantum system as if it were in isolation, a matrix of the same size that describes how that system couples to its environment, and a spectral distribution function and temperature describing the environment’s influence on it, in addition to the open quantum system’s initial density matrix and a grid of times. With this, the program returns the reduced density matrix of the open quantum system at all moments specified by that grid of times (or just the last moment specified by the grid of times if the user makes this choice). This overall calculation can be divided into two stages: the setup of the Feynman integral, and the actual calculation of the Feynman integral for time propagation of the density matrix. When this program calculates this propagation on a multi-core CPU, it is this propagation that is usually the rate-limiting step of the calculation, but when it is calculated on a GPU, the propagation is calculated so quickly that the setup of the Feynman integral can actually become the rate-limiting step. The overhead of transferring information from the CPU to the GPU and back seems to have a negligible effect on the overall runtime of the program. When the required information cannot fit on the GPU, the user can choose to run the entire program on a CPU. Catalogue identifier: AEPX_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEPX_v1_0.html. Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 703. No. of bytes in distributed program, including test data, etc.: 11026. Distribution format: tar.gz. Programming…
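
    For orientation only, here is what "propagating a reduced density matrix over a grid of times" means in the simplest possible setting; this is a minimal Lindblad dephasing model integrated with an Euler step, not FeynDyn's Feynman-integral machinery, and H, gamma and the initial state are illustrative:

      H = [0, 1; 1, 0];                          % two-level Hamiltonian (hbar = 1)
      gamma = 0.2;                               % dephasing rate
      Lz = [1, 0; 0, -1];                        % dephasing operator
      rho = [1, 0; 0, 0];                        % initial density matrix
      times = linspace(0, 10, 500); dt = times(2) - times(1);
      pop = zeros(size(times));
      for k = 1:numel(times)
          pop(k) = real(rho(1,1));               % record a population on the grid
          drho = -1i*(H*rho - rho*H) + gamma*(Lz*rho*Lz - rho);  % Lindblad RHS
          rho = rho + dt * drho;                 % Euler step (keep dt small)
      end
      plot(times, pop), xlabel('t'), ylabel('\rho_{11}(t)')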

  5. Calculus Demonstrations Using MATLAB

    ERIC Educational Resources Information Center

    Dunn, Peter K.; Harman, Chris

    2002-01-01

    The note discusses ways in which technology can be used in the calculus learning process. In particular, five MATLAB programs are detailed for use by instructors or students that demonstrate important concepts in introductory calculus: Newton's method, differentiation and integration. Two of the programs are animated. The programs and the…
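
    Newton's method, one of the concepts demonstrated, takes only a few lines in MATLAB (our own minimal version, not one of the five programs):

      f  = @(x) x.^2 - 2;                 % find the root sqrt(2)
      fp = @(x) 2*x;                      % derivative
      x = 1;                              % initial guess
      for k = 1:6
          x = x - f(x) / fp(x);           % Newton update
          fprintf('iteration %d: x = %.12f\n', k, x);
      end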

  6. An introduction to MATLAB.

    PubMed

    Sobie, Eric A

    2011-09-13

    This two-part lecture introduces students to the scientific computing language MATLAB. Prior computer programming experience is not required. The lectures present basic concepts of computer programming logic that tend to cause difficulties for beginners in addition to concepts that relate specifically to the MATLAB language syntax. The lectures begin with a discussion of vectors, matrices, and arrays. Because many types of biological data, such as fluorescence images and DNA microarrays, are stored as two-dimensional objects, processing these data is a form of array manipulation, and MATLAB is especially adept at handling such array objects. The students are introduced to basic commands in MATLAB, as well as built-in functions that provide useful shortcuts. The second lecture focuses on the differences between MATLAB scripts and MATLAB functions and describes when one method of programming organization might be preferable to the other. The principles are illustrated through the analysis of experimental data, specifically measurements of intracellular calcium concentration in live cells obtained using confocal microscopy.
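
    In the spirit of the lecture's point about image data as arrays, a minimal example (ours, not from the lecture): threshold a synthetic "fluorescence image" and summarize it with plain matrix operations.

      [X, Y] = meshgrid(1:128);
      img = exp(-((X-64).^2 + (Y-64).^2)/400) + 0.05*randn(128);  % spot + noise
      mask = img > 0.2;                      % logical array (thresholding)
      fprintf('bright pixels: %d, mean intensity: %.3f\n', ...
              nnz(mask), mean(img(mask)));   % logical indexing into the image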

  7. ISOPAQ, a MATLAB program for stratigraphic and isopach mapping: example application to the French Bajocian (Jurassic) sediments

    NASA Astrophysics Data System (ADS)

    Monnet, Claude; Bouchet, Stéphane; Thiry-Bastien, Philippe

    2003-11-01

    The three-dimensional reconstruction of basin sediments has become a major topic in earth sciences and is now a necessary step for modeling and understanding the depositional context of sediments. Because data are generally scattered, the construction of any irregular, continuous surface involves the interpolation of a large number of points over a regular grid. However, interpolation is a highly technical specialty that is still somewhat of a black art for most people. The lack of multi-platform contouring software that is easy to use, fast and automatic, without numerous abstruse parameters, motivated the programming of a software package called ISOPAQ. This program is an interactive desktop tool for spatial analysis, interpolation and display (location, contour and surface mapping) of earth science data, especially stratigraphic data. It handles four-dimensional data sets, where the dimensions are usually longitude, latitude, thickness and time, stored in a single text file. The program uses functions written for the MATLAB® software. Data are managed by means of a user-friendly graphical interface, which allows the user to interpolate and generate maps for stratigraphic analyses. This program can process and compare several interpolation methods (nearest neighbor, linear and cubic triangulations, inverse distance and surface splines) and some stratigraphic treatments, such as the decompaction of sediments. Moreover, the window interface helps the user to easily change parameters like coordinates, grid cell size, equidistance of contour lines and scale between files. Although primarily developed, via its graphical user interface, for non-specialists in interpolation, the program can also easily be extended by practitioners with their own functions, since it is written in the open MATLAB language. As an example, the program is applied here to the Bajocian stratigraphic sequences of eastern France.
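
    The method comparison ISOPAQ supports can be sketched with griddata (illustrative data, not the program itself):

      xd = 100*rand(60,1); yd = 100*rand(60,1);         % scattered well locations
      zd = 20 + 0.1*xd + 5*sin(yd/15);                  % stand-in thickness data
      [Xq, Yq] = meshgrid(0:2:100);
      methods = {'nearest', 'linear', 'cubic'};
      for k = 1:3                                       % same data, three interpolants
          Zq = griddata(xd, yd, zd, Xq, Yq, methods{k});
          subplot(1, 3, k), contourf(Xq, Yq, Zq), title(methods{k})
      end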

  8. Epidemiologic programs for computers and calculators. Simple algorithms for the representation of deterministic and stochastic versions of the Reed-Frost epidemic model using a programmable calculator.

    PubMed

    Franco, E L; Simons, A R

    1986-05-01

    Two programs are described for the emulation of the dynamics of Reed-Frost progressive epidemics in a handheld programmable calculator (HP-41C series). The programs provide a complete record of cases, susceptibles, and immunes at each epidemic period using either the deterministic formulation or the trough analogue of the mechanical model for the stochastic version. Both programs can compute epidemics that include a constant rate of influx or outflux of susceptibles and single or double infectivity time periods.
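
    The deterministic Reed-Frost recursion is a few lines in any language; here is a MATLAB sketch of the same logic (not a port of the calculator programs), with q the probability of avoiding one infectious contact:

      q = 0.98; S = 1000; C = 1; T = 25;          % illustrative parameters
      cases = zeros(T, 1); susc = zeros(T, 1);
      for t = 1:T
          Cnew = S * (1 - q^C);                   % expected new cases
          S = S - Cnew;  C = Cnew;                % update susceptibles and cases
          cases(t) = C;  susc(t) = S;
      end
      % Stochastic version: replace the expectation by a binomial draw, e.g.
      %   Cnew = sum(rand(round(S), 1) < 1 - q^C);
      plot(1:T, cases, 'o-', 1:T, susc, 's-'), legend('cases', 'susceptibles')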

  9. FOLD PROFILER: A MATLAB®-based program for fold shape classification

    NASA Astrophysics Data System (ADS)

    Lisle, R. J.; Fernández Martínez, J. L.; Bobillo-Ares, N.; Menéndez, O.; Aller, J.; Bastida, F.

    2006-02-01

    FOLD PROFILER is a MATLAB code for classifying the shapes of profiles of folded surfaces. The classification is based on the comparison of the natural fold profile with curves representing mathematical functions. The user is offered a choice of four methods, each based on a different type of function: cubic Bezier curves, conic sections, power functions and superellipses. The comparison is carried out by the visual matching of the fold profile displayed on-screen from an imported digital image and computed theoretical curves which are superimposed on the image of the fold. To improve the fit with the real fold shape, the parameters of the theoretical curves are changed by simple mouse actions. The parameters of the mathematical function that best fits the real folds are used to classify the fold shape. FOLD PROFILER allows the rapid implementation of four existing methods for fold shape analysis. The attractiveness of this analytical tool lies in the way it gives an instant visual appreciation of the effect of changing the parameters that are used to classify fold geometry.
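
    One of the four curve families, sketched in a few lines (our own illustration): a superellipse |x/a|^n + |y/b|^n = 1 traced for several exponents n, which is the parameter that changes the shape class.

      t = linspace(0, pi/2, 200);            % one quadrant; the rest by symmetry
      a = 1; b = 0.5;                        % aspect of the fold profile
      hold on
      for n = [1, 1.5, 2, 4]
          plot(a * cos(t).^(2/n), b * sin(t).^(2/n))   % parametric superellipse
      end
      hold off, axis equal
      legend('n = 1', 'n = 1.5', 'n = 2', 'n = 4')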

  10. Use of conditional rule structure to automate clinical decision support: a comparison of artificial intelligence and deterministic programming techniques

    SciTech Connect

    Friedman, R.H.; Frank, A.D.

    1983-08-01

    A rule-based computer system was developed to perform clinical decision-making support within a medical information system, oncology practice, and clinical research. This rule-based system, which has been programmed using deterministic rules, possesses features of generalizability, modularity of structure, convenience in rule acquisition, explainability, and utility for patient care and teaching, features which have been identified as advantages of artificial intelligence (AI) rule-based systems. Formal rules are primarily represented as conditional statements; common conditions and actions are stored in system dictionaries so that they can be recalled at any time to form new decision rules. Important similarities and differences exist between the structure of this system and clinical computer systems utilizing artificial intelligence (AI) production rule techniques. The non-AI rule-based system possesses advantages in cost and ease of implementation. The degree to which significant medical decision problems can be solved by this technique remains uncertain, as does whether the more complex AI methodologies will be required. 15 references.
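
    The conditional rule structure described above can be pictured as condition/action pairs scanned deterministically; the sketch below is ours, with hypothetical field names and rules, not the authors' system:

      patient = struct('wbc', 2.1, 'temp', 38.6, 'onChemo', true);
      rules(1).cond   = @(p) p.onChemo && p.wbc < 3.0;
      rules(1).action = 'Flag: low white count; review chemotherapy dose.';
      rules(2).cond   = @(p) p.temp > 38.0 && p.wbc < 3.0;
      rules(2).action = 'Flag: possible neutropenic fever; alert clinician.';
      for k = 1:numel(rules)                 % deterministic forward scan
          if rules(k).cond(patient)
              disp(rules(k).action)
          end
      end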

  11. Deterministic Entangled Nanosource

    DTIC Science & Technology

    2008-08-01

    Final report, covering Sep 2005 - Sep 2008; report date 01-09-2008; contract number FA9550-05-1-0455; author: Khitrova, Galina.

  12. Deterministic Ising dynamics

    SciTech Connect

    Creutz, M.

    1986-03-01

    A deterministic cellular automaton rule is presented which simulates the Ising model. On each cell, in addition to an Ising spin, there is a space-time parity bit and a variable playing the role of a momentum conjugate to the spin. The procedure permits study of nonequilibrium phenomena, heat flow, mixing, and time correlations. The algorithm can make full use of multispin coding, thus permitting fast programs involving parallel processing on serial machines.
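
    A simplified sketch of the idea (ours: a single shared "demon" energy variable rather than Creutz's per-cell conjugate momentum and parity scheme): a flip is accepted only if the demon can pay for it, so total energy is exactly conserved and the dynamics uses no random numbers.

      n = 64; J = 1;
      s = sign(randn(n)); s(s == 0) = 1;        % random initial spins (+/-1)
      demon = 10;                                % demon energy reservoir
      for sweep = 1:200
          for i = 1:n
              for j = 1:n                        % fixed, deterministic sweep order
                  nb = s(mod(i, n)+1, j) + s(mod(i-2, n)+1, j) ...
                     + s(i, mod(j, n)+1) + s(i, mod(j-2, n)+1);
                  dE = 2 * J * s(i, j) * nb;     % energy cost of flipping s(i,j)
                  if dE <= demon                 % demon pays, or absorbs, dE
                      s(i, j) = -s(i, j);
                      demon = demon - dE;
                  end
              end
          end
      end
      imagesc(s), axis square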

  13. VOLTINT: A Matlab ®-based program for semi-automated processing of geochemical data acquired by voltammetry

    NASA Astrophysics Data System (ADS)

    Bristow, Gwendolyn; Taillefert, Martial

    2008-02-01

    Recent progress has resulted in the development of advanced techniques to acquire geochemical information in situ in aquatic systems. Among these techniques, voltammetry has generated significant interest for its ability to detect several important redox-sensitive chemical species in a fast, reliable, and automated manner. Many research groups worldwide have now adopted these techniques for geochemical measurements in various marine and freshwater systems, including water column, sediment, microbial mat, and groundwater, with high spatial and temporal resolution. Unfortunately, the ability to conduct multiple measurements with great spatial and temporal resolution generates large data sets that are difficult to integrate manually. We report a new computer program, voltammetric integration software (VOLTINT), that can integrate large voltammetric data sets semi-automatically. This program, implemented in Matlab®, is based on a graphical user interface to visualize and identify voltammetric signals. The program differentiates between voltammetric techniques and derives or integrates voltammetric signals to produce output data files containing the redox potentials, current intensities, and, when appropriate, peak surface areas of each electrochemical species that can be detected. VOLTINT was developed with the intention of integrating voltammetric data obtained with potentiostats from a specific company, Analytical Instrument Systems, Inc. (AIS). However, the scripts can be easily altered to process any ASCII file containing voltammetric data. The details of the program are presented, and examples are provided along with recommendations regarding the analysis of voltammetric data in the context of this program. VOLTINT is available free of charge to anyone who is interested in integrating multiple voltammetric data files in a fast and reliable manner.
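
    The kind of per-peak processing described reduces to baseline subtraction and integration (an illustrative sketch on synthetic data, not VOLTINT):

      E = linspace(-1.6, -0.2, 700);                       % potential (V)
      I = 0.4*E + 1.0 + 2.5*exp(-((E + 0.9)/0.05).^2);     % linear baseline + peak
      win = E > -1.1 & E < -0.7;                           % window around the peak
      i1 = find(win, 1); i2 = find(win, 1, 'last');
      base = interp1(E([i1, i2]), I([i1, i2]), E(win));    % linear baseline estimate
      net = I(win) - base;                                 % baseline-subtracted signal
      [~, imax] = max(net);  Ew = E(win);
      fprintf('peak potential %.3f V, peak area %.4f\n', Ew(imax), trapz(Ew, net));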

  14. Semi-deterministic reasoning

    SciTech Connect

    Chengjiang Mao

    1996-12-31

    In typical AI systems, we employ so-called non-deterministic reasoning (NDR), which resorts to some systematic search with backtracking in the search spaces defined by knowledge bases (KBs). An eminent property of NDR is that it facilitates programming, especially programming for those difficult AI problems such as natural language processing for which it is difficult to find algorithms to tell computers what to do at every step. However, poor efficiency of NDR is still an open problem. Our work aims at overcoming this efficiency problem.

  15. E-Learning Technologies: Employing Matlab Web Server to Facilitate the Education of Mathematical Programming

    ERIC Educational Resources Information Center

    Karagiannis, P.; Markelis, I.; Paparrizos, K.; Samaras, N.; Sifaleras, A.

    2006-01-01

    This paper presents new web-based educational software (webNetPro) for "Linear Network Programming." It includes many algorithms for "Network Optimization" problems, such as shortest path problems, minimum spanning tree problems, maximum flow problems and other search algorithms. Therefore, webNetPro can assist the teaching process of courses such…

  16. The GMT/MATLAB Toolbox

    NASA Astrophysics Data System (ADS)

    Wessel, Paul; Luis, Joaquim F.

    2017-02-01

    The GMT/MATLAB toolbox is a basic interface between MATLAB® (or Octave) and GMT, the Generic Mapping Tools, which allows MATLAB users full access to all GMT modules. Data may be passed between the two programs using intermediate MATLAB structures that organize the metadata needed; these are produced when GMT modules are run. In addition, standard MATLAB matrix data can be used directly as input to GMT modules. The toolbox improves interoperability between two widely used tools in the geosciences and extends the capability of both tools: GMT gains access to the powerful computational capabilities of MATLAB while the latter gains the ability to access specialized gridding algorithms and can produce publication-quality PostScript-based illustrations. The toolbox is available on all platforms and may be downloaded from the GMT website.
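
    A usage sketch, assuming the single gmt() gateway function the toolbox provides and standard GMT module syntax (treat the exact call forms here as assumptions, not verified documentation):

      d = [100*rand(50,1), 100*rand(50,1), randn(50,1)];   % x, y, z triples
      G = gmt('surface -R0/100/0/100 -I1', d);             % grid the scattered data
      % G comes back as a MATLAB structure holding the grid plus its metadata,
      % which is the interchange mechanism described above; it can be passed
      % straight into another module:
      gmt('grdcontour -JX10c -C0.5 -P > map.ps', G)        % contour to PostScript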

  17. rFRET: A comprehensive, Matlab-based program for analyzing intensity-based ratiometric microscopic FRET experiments.

    PubMed

    Nagy, Peter; Szabó, Ágnes; Váradi, Tímea; Kovács, Tamás; Batta, Gyula; Szöllősi, János

    2016-04-01

    Fluorescence or Förster resonance energy transfer (FRET) remains one of the most widely used methods for assessing protein clustering and conformation. Although it is a method with solid physical foundations, many applications of FRET fall short of providing quantitative results due to inappropriate calibration and controls. This shortcoming is especially valid for microscopy where currently available tools have limited or no capability at all to display parameter distributions or to perform gating. Since users of multiparameter flow cytometry usually apply these tools, the absence of these features in applications developed for microscopic FRET analysis is a significant limitation. Therefore, we developed a graphical user interface-controlled Matlab application for the evaluation of ratiometric, intensity-based microscopic FRET measurements. The program can calculate all the necessary overspill and spectroscopic correction factors and the FRET efficiency and it displays the results on histograms and dot plots. Gating on plots and mask images can be used to limit the calculation to certain parts of the image. It is an important feature of the program that the calculated parameters can be determined by regression methods, maximum likelihood estimation (MLE) and from summed intensities in addition to pixel-by-pixel evaluation. The confidence interval of calculated parameters can be estimated using parameter simulations if the approximate average number of detected photons is known. The program is not only user-friendly, but it provides rich output, it gives the user freedom to choose from different calculation modes and it gives insight into the reliability and distribution of the calculated parameters.
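
    For orientation, the standard three-filter (donor, FRET, acceptor channel) arithmetic at the heart of such evaluations looks as follows; this is the generic sensitized-emission scheme, not rFRET's own code, and every factor here is an illustrative placeholder that would come from calibration:

      Idd = 100 + 10*randn(64);   % donor channel image
      Ida =  80 + 10*randn(64);   % FRET (sensitized emission) channel image
      Iaa = 120 + 10*randn(64);   % acceptor channel image
      S1 = 0.15;                  % donor bleed-through into the FRET channel
      S2 = 0.10;                  % acceptor cross-excitation factor
      G  = 2.0;                   % sensitized-emission calibration factor
      Fc = Ida - S1*Idd - S2*Iaa; % spillover-corrected FRET signal, per pixel
      E  = Fc ./ (Fc + G*Idd);    % pixel-by-pixel FRET efficiency
      hist(E(:), 50), xlabel('FRET efficiency')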

  18. Deterministic Entangled Nanosource

    DTIC Science & Technology

    2008-08-01

    Final report, covering Sep 2005 - Sep 2008; report date 01-09-2008; contract number FA9550-05-1-0455; author: Khitrova, Galina.

  19. Education of optics with Matlab

    NASA Astrophysics Data System (ADS)

    Miks, Antonin; Novak, Jiri

    2003-11-01

    Our work shows one possible approach to teaching various parts of optics with the mathematical system MATLAB. The work focuses mainly on teaching interference and diffraction of light and the diffraction theory of optical imaging. In our laboratories, students can easily perform computer simulations of various problems they may meet in practice, e.g. two-beam interferometry, imaging in coherent, partially coherent or incoherent light, diffraction from gratings of different types, etc. The MATLAB system can also be used to simulate problems in holography and holographic interferometry of static and dynamic events. Students can further simulate the transformation of optical beams through a simple lens or a system of lenses by means of ray tracing. For every part of optics described, we have software programmed in MATLAB. MATLAB seems to be a very good tool for numerical modelling of the properties of various optical systems and for teaching optics.
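
    The kind of simulation students run in such a course, in a few lines (our own example, not the authors' software): Fraunhofer diffraction of a double slit computed with an FFT.

      N = 2048; dx = 1e-5;                       % samples and spacing (m)
      x = (-N/2:N/2-1) * dx;
      slit = 1e-4; sep = 5e-4;                   % slit width and separation (m)
      ap = (abs(x - sep/2) < slit/2) | (abs(x + sep/2) < slit/2);
      U = fftshift(fft(fftshift(double(ap))));   % far-field amplitude
      I = abs(U).^2;  I = I / max(I);
      plot(I), xlabel('FFT bin'), ylabel('normalized intensity')
      % The two-slit interference fringes sit under the single-slit envelope.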

  20. Matlab Based LOCO

    SciTech Connect

    Portmann, Greg; Safranek, James; Huang, Xiaobiao; /SLAC

    2011-10-18

    The LOCO algorithm has been used by many accelerators around the world. Although the uses for LOCO vary, the most common use has been to find calibration errors and correct the optics functions. The light source community in particular has made extensive use of the LOCO algorithms to tightly control the beta function and coupling. Maintaining high-quality beam parameters requires constant attention, so a relatively large effort was put into software development for the LOCO application. The LOCO code was originally written in FORTRAN. This code worked fine but was somewhat awkward to use. For instance, the FORTRAN code itself did not calculate the model response matrix; it required a separate modeling code such as MAD to calculate the model matrix, and then one manually loaded the data into the LOCO code. As the number of people interested in LOCO grew, it became necessary to make it easier to use. The decision to port LOCO to Matlab was relatively easy: it is best to use a matrix programming language with good graphics capability; Matlab was also being used for high-level machine control; and the accelerator modeling code AT [5] was already developed for Matlab. Since LOCO requires collecting and processing a relatively large amount of data, it is very helpful to have the LOCO code compatible with the high-level machine control [3]. A number of new features were added while porting the code from FORTRAN, and new methods continue to evolve [7,9]. Although Matlab LOCO was written with AT as the underlying tracking code, a mechanism to connect to other modeling codes has been provided.

  1. Test Generator for MATLAB Simulations

    NASA Technical Reports Server (NTRS)

    Henry, Joel

    2011-01-01

    MATLAB Automated Test Tool, version 3.0 (MATT 3.0) is a software package that provides automated tools that reduce the time needed for extensive testing of simulation models that have been constructed in the MATLAB programming language by use of the Simulink and Real-Time Workshop programs. MATT 3.0 runs on top of the MATLAB engine application-program interface to communicate with the Simulink engine. MATT 3.0 automatically generates source code from the models, generates custom input data for testing both the models and the source code, and generates graphs and other presentations that facilitate comparison of the outputs of the models and the source code for the same input data. Context-sensitive and fully searchable help is provided in HyperText Markup Language (HTML) format.

  2. Channel Access Client Toolbox for Matlab

    SciTech Connect

    Terebilo, Andrei

    2002-08-07

    This paper reports on the MATLAB Channel Access (MCA) Toolbox, a MATLAB [1] interface to the EPICS Channel Access (CA) client library. We are developing the toolbox for SPEAR3 accelerator controls, but it is of general use for accelerator and experimental physics applications programming. It is packaged as a MATLAB toolbox to allow easy development of complex CA client applications entirely in MATLAB. The benefits include: the ability to calculate and display parameters that use EPICS process variables as inputs, availability of MATLAB graphics tools for user interface design, and integration with the MATLAB-based accelerator modeling software, Accelerator Toolbox [2-4]. Another purpose of this paper is to propose a feasible path to a synergy between accelerator control systems and accelerator simulation codes, the idea known as the on-line accelerator model.

  3. MATLAB-based program for optimization of quantum cascade laser active region parameters and calculation of output characteristics in magnetic field

    NASA Astrophysics Data System (ADS)

    Smiljanić, J.; Žeželj, M.; Milanović, V.; Radovanović, J.; Stanković, I.

    2014-03-01

    A strong magnetic field applied along the growth direction of a quantum cascade laser (QCL) active region gives rise to a spectrum of discrete energy states, the Landau levels. By combining quantum engineering of a QCL with a static magnetic field, we can selectively inhibit/enhance non-radiative electron relaxation processes between the relevant Landau levels of a triple quantum well and realize a tunable surface-emitting device. An efficient numerical implementation is presented of the optimization of GaAs/AlGaAs QCL active-region parameters and the calculation of output properties in the magnetic field. Both the theoretical analysis and the MATLAB implementation are given for the effect of LO-phonon and interface-roughness scattering mechanisms on the operation of the QCL. At elevated temperatures, electrons in the relevant laser states absorb/emit more LO-phonons, which results in reduction of the optical gain. The decrease in the optical gain is moderated by the occurrence of interface roughness scattering, which remains unchanged with increasing temperature. Using the calculated scattering rates as input data, rate equations can be solved and the population inversion and optical gain obtained. Incorporation of the interface roughness scattering mechanism into the model did not create new resonant peaks of the optical gain; however, it resulted in shifting the positions of the existing peaks and an overall reduction of the optical gain. Catalogue identifier: AERL_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERL_v1_0.html. Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 37763. No. of bytes in distributed program, including test data, etc.: 2757956. Distribution format: tar.gz. Programming language: MATLAB. Computer: Any capable of running MATLAB version R2010a or higher. Operating system: Any platform…
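
    The final step mentioned above, solving rate equations for the level populations, has this generic shape (illustrative rates, not the paper's; W(i,j) is the scattering rate from level i to level j):

      W = [0,    5e11, 1e11;                  % stand-in scattering rates (1/s)
           2e10, 0,    8e11;
           5e9,  3e10, 0   ];
      inj  = [0; 0; 1e12];                    % injection into the top level
      Wout = 1e12;                            % extraction rate from level 1
      A = W.' - diag(sum(W, 2));              % dn/dt = A*n + inj
      A(1,1) = A(1,1) - Wout;                 % carriers leave via level 1
      n = (-A) \ inj;                         % steady state: A*n + inj = 0
      fprintf('population inversion n3 - n2 = %.3e\n', n(3) - n(2));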

  4. Deterministic Walks with Choice

    SciTech Connect

    Beeler, Katy E.; Berenhaut, Kenneth S.; Cooper, Joshua N.; Hunter, Meagan N.; Barr, Peter S.

    2014-01-10

    This paper studies deterministic movement over toroidal grids, integrating local information, bounded memory and choice at individual nodes. The research is motivated by recent work on deterministic random walks, and applications in multi-agent systems. Several results regarding passing tokens through toroidal grids are discussed, as well as some open questions.
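
    One concrete instance of such deterministic movement (a rotor-router style walk, as an illustration; this is not the paper's model): each node cycles through its four neighbours in a fixed order, so the token's route is fully determined by the rotor states.

      n = 10; steps = 500;
      rotor = zeros(n);                        % per-node pointer into dirs
      dirs = [0 1; 1 0; 0 -1; -1 0];           % E, S, W, N
      pos = [1, 1]; visits = zeros(n);
      for t = 1:steps
          i = pos(1); j = pos(2);
          visits(i, j) = visits(i, j) + 1;
          r = rotor(i, j);
          rotor(i, j) = mod(r + 1, 4);         % advance this node's rotor
          pos = mod(pos + dirs(r + 1, :) - 1, n) + 1;   % toroidal wraparound
      end
      imagesc(visits), axis square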

  5. Novel algorithm and MATLAB-based program for automated power law analysis of single particle, time-dependent mean-square displacement

    NASA Astrophysics Data System (ADS)

    Umansky, Moti; Weihs, Daphne

    2012-08-01

    In many physical and biophysical studies, single-particle tracking is utilized to reveal interactions, diffusion coefficients, active modes of driving motion, dynamic local structure, micromechanics, and microrheology. The basic analysis applied to those data is to determine the time-dependent mean-square displacement (MSD) of particle trajectories and perform time- and ensemble-averaging of similar motions. The motion of particles typically exhibits time-dependent power-law scaling, and only trajectories with qualitatively and quantitatively comparable MSD should be ensemble-averaged. Ensemble averaging trajectories that arise from different mechanisms, e.g., actively driven and diffusive, is incorrect and can result in inaccurate correlations between structure, mechanics, and activity. We have developed an algorithm to automatically and accurately determine power-law scaling of experimentally measured single-particle MSD. Trajectories can then be categorized and grouped according to user-defined cutoffs of time, amplitudes, scaling exponent values, or combinations. Power-law fits are then provided for each trajectory alongside categorized groups of trajectories, histograms of power laws, and the ensemble-averaged MSD of each group. The codes are designed to be easily incorporated into existing user codes. We expect that this algorithm and program will be invaluable to anyone performing single-particle tracking, be it in physical or biophysical systems. Catalogue identifier: AEMD_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMD_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 25 892. No. of bytes in distributed program, including test data, etc.: 5 572 780. Distribution format: tar.gz. Programming language: MATLAB (MathWorks Inc.) version 7.11 (2010b) or higher, program…
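
    The core fit-and-categorize step is compact (a sketch of the idea, not the authors' code; the cutoffs stand in for the user-defined ones mentioned above):

      dt = 0.05; N = 2000;
      r = cumsum(0.1 * randn(N, 2));                    % one 2-D trajectory
      lags = unique(round(logspace(0, 2, 15)));
      msd = arrayfun(@(L) mean(sum((r(1+L:end, :) - r(1:end-L, :)).^2, 2)), lags);
      p = polyfit(log(lags * dt), log(msd), 1);         % log-log slope
      alpha = p(1);                                     % MSD ~ t^alpha
      if alpha < 0.9,      cls = 'subdiffusive';
      elseif alpha <= 1.1, cls = 'diffusive';
      else,                cls = 'superdiffusive (e.g., actively driven)';
      end
      fprintf('alpha = %.2f -> %s\n', alpha, cls);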

  6. BGPR_Reconstruct: A MATLAB® ray-tracing program for nonlinear inversion of first arrival travel time data from zero-offset borehole radar

    NASA Astrophysics Data System (ADS)

    Rucker, Dale F.; Ferré, Ty P. A.

    2004-08-01

    A MATLAB program was developed to invert first arrival travel time picks from zero-offset profiling borehole ground penetrating radar traces to obtain the electromagnetic wave propagation velocities in soil. Zero-offset profiling refers to a mode of operation wherein the centers of the bistatic antennae are lowered to the same depth below ground for each measurement. The inversion uses a simulated annealing optimization routine, whereby the model attempts to reduce the root mean square error between the measured and modeled travel times by perturbing the velocity in a ray tracing routine. Measurement uncertainty is incorporated through the presentation of the ensemble mean and standard deviation from the results of a Monte Carlo simulation. The program features a pre-processor to modify or delete travel time information from the profile before inversion, and post-processing through presentation of the ensemble statistics of the water contents inferred from the velocity profile. The program includes a novel application of a graphical user interface to animate the velocity fitting routine.
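
    The inversion loop the abstract describes can be sketched compactly. The following is a minimal, hypothetical illustration of simulated-annealing velocity fitting for zero-offset straight rays (travel time = antenna separation / velocity); none of the names below come from the BGPR_Reconstruct distribution itself.

        % Hedged sketch: simulated annealing fit of velocities to measured
        % zero-offset travel times. t_meas (vector) and x_offset (scalar
        % antenna separation) are hypothetical inputs.
        function v = sa_velocity_fit(t_meas, x_offset)
            v = 0.1 * ones(size(t_meas));          % initial velocities (m/ns)
            T = 1.0;                               % annealing temperature
            rmse = @(v) sqrt(mean((t_meas - x_offset ./ v).^2));
            err = rmse(v);
            for iter = 1:5000
                v_new = v;
                i = randi(numel(v));               % perturb one depth's velocity
                v_new(i) = max(v_new(i) + 0.01 * T * randn, 1e-3);
                err_new = rmse(v_new);
                % always accept improvements; accept uphill moves with
                % Boltzmann probability so the search can escape local minima
                if err_new < err || rand < exp((err - err_new) / T)
                    v = v_new; err = err_new;
                end
                T = 0.999 * T;                     % geometric cooling schedule
            end
        end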

  7. MATLAB Tensor Toolbox

    SciTech Connect

    Kolda, Tamara G.; Bader, Brett W.

    2006-08-03

    This software provides a collection of MATLAB classes for tensor manipulations that can be used for fast algorithm prototyping. The tensor class extends the functionality of MATLAB's multidimensional arrays by supporting additional operations such as tensor multiplication. We have also added support for sparse tensors, tensors in Kruskal or Tucker format, and tensors stored as matrices (both dense and sparse).
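
    As a brief illustration of the toolbox interface (hedged: this assumes the Tensor Toolbox is on the MATLAB path; tensor, sptensor, ttm and nnz are toolbox-provided names, so check them against the installed version):

        X = tensor(rand(4, 3, 2));     % dense third-order tensor
        A = rand(5, 4);
        Y = ttm(X, A, 1);              % mode-1 tensor-times-matrix, size 5x3x2
        subs = [1 1 1; 2 3 2; 4 2 1];  % sparse tensor from subscripts and values
        vals = [1.0; 2.5; -3.0];
        S = sptensor(subs, vals, [4 3 2]);
        nnz(S)                         % number of stored nonzeros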

  8. MineSeis -- A MATLAB® GUI program to calculate synthetic seismograms from a linear, multi-shot blast source model

    SciTech Connect

    Yang, X.

    1998-04-01

    Large scale (up to 5 kt) chemical blasts are routinely conducted by mining and quarry industries around the world to remove overburden or to fragment rocks. Because of their ability to trigger the future International Monitoring System (IMS) of the Comprehensive Test Ban Treaty (CTBT), these blasts are monitored and studied by verification seismologists for the purpose of discriminating them from possible clandestine nuclear tests. One important component of these studies is the modeling of ground motions from these blasts with theoretical and empirical source models. The modeling exercises provide physical bases for regional discriminants and help to explain the observed signal characteristics. The program MineSeis has been developed to implement the synthetic seismogram modeling of multi-shot blast sources through the linear superposition of single-shot sources. The single-shot source used in the modeling is the spherical explosion plus spall model described here. Mueller and Murphy's (1971) model is used as the spherical explosion model. A modification of Anandakrishnan et al.'s (1997) spall model is developed for the spall component. The program is implemented with the MATLAB® Graphical User Interface (GUI), providing the user with easy, interactive control of the calculation.

  9. Deterministic Execution of Ptides Programs

    DTIC Science & Technology

    2013-05-15

    ... are developed in Ptolemy, a design and simulation environment for heterogeneous systems. This framework also contains a code generation framework which is leveraged to ... Code generation is implemented in Ptolemy II [4], an academic tool for designing and experimenting with heterogeneous system models. The first section of ...

  10. Parallelizing AT with MatlabMPI

    SciTech Connect

    Li, Evan Y.; /Brown U. /SLAC

    2011-06-22

    The Accelerator Toolbox (AT) is a high-level collection of tools and scripts specifically oriented toward solving problems dealing with computational accelerator physics. It is integrated into the MATLAB environment, which provides an accessible, intuitive interface for accelerator physicists, allowing researchers to focus the majority of their efforts on simulations and calculations, rather than programming and debugging difficulties. Efforts toward parallelization of AT have been put in place to upgrade its performance to modern standards of computing. We utilized the packages MatlabMPI and pMatlab, which were developed by MIT Lincoln Laboratory, to set up a message-passing environment that could be called within MATLAB, establishing the necessary prerequisites for multithread processing capabilities. On local quad-core CPUs, we were able to demonstrate processor efficiencies of roughly 95% and speed increases of nearly 380%. By exploiting the efficacy of modern-day parallel computing, we were able to demonstrate highly efficient per-processor speed increases in AT's beam-tracking functions. Extrapolating from these predictions, we can expect to reduce week-long computation runtimes to less than 15 minutes. This is a huge performance improvement and has enormous implications for the future computing power of the accelerator physics group at SSRL. However, one of the downfalls of parringpass is its current lack of transparency; the pMatlab and MatlabMPI packages must first be well understood by the user before the system can be configured to run the scripts. In addition, the instantiation of argument parameters requires internal modification of the source code. Thus, parringpass cannot be directly run from the MATLAB command line, which detracts from its flexibility and user-friendliness. Future work in AT's parallelization will focus on development of external functions and scripts that can be called from within MATLAB and configured on multiple nodes, while
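
    For orientation, the message-passing style that MatlabMPI exposes inside MATLAB looks roughly like the following sketch (hedged: the function names follow MIT Lincoln Laboratory's MatlabMPI documentation as recalled here; verify the exact signatures against the installed package):

        MPI_Init;                            % start the message-passing environment
        comm    = MPI_COMM_WORLD;
        my_rank = MPI_Comm_rank(comm);       % which process am I?
        nproc   = MPI_Comm_size(comm);       % how many processes total?
        tag = 1;
        if my_rank == 0
            data = rand(1, 1000);            % work produced on rank 0
            MPI_Send(1, tag, comm, data);    % ship it to rank 1
        elseif my_rank == 1
            data = MPI_Recv(0, tag, comm);   % blocking receive from rank 0
        end
        MPI_Finalize;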

  11. A deterministic discrete ordinates transport proxy application

    SciTech Connect

    2014-06-03

    Kripke is a simple 3D deterministic discrete ordinates (Sn) particle transport code that maintains the computational load and communications pattern of a real transport code. It is intended to be a research tool to explore different data layouts, new programming paradigms and computer architectures.

  12. Generalized Deterministic Traffic Rules

    NASA Astrophysics Data System (ADS)

    Fuks, Henryk; Boccara, Nino

    We study a family of deterministic models for highway traffic flow which generalize cellular automaton rule 184. This family is parameterized by the speed limit m and another parameter k that represents a "degree of aggressiveness" in driving, strictly related to the distance between two consecutive cars. We compare two driving strategies with identical maximum throughput: "conservative" driving with high speed limit and "aggressive" driving with low speed limit. Those two strategies are evaluated in terms of accident probability. We also discuss fundamental diagrams of generalized traffic rules and examine limitations of maximum achievable throughput. Possible modifications of the model are considered.
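
    The base case of this family, speed limit m = 1, is the classical rule 184, which is simple enough to simulate directly. A minimal sketch (the paper's generalized m, k rules are not reproduced here):

        N = 100; steps = 200; density = 0.5;
        road = rand(1, N) < density;          % 1 = car, 0 = empty; periodic road
        for t = 1:steps
            ahead  = circshift(road, -1);     % site in front of each cell
            behind = circshift(road,  1);     % site behind each cell
            % rule 184: a car advances iff the site ahead is empty, so a site
            % is occupied next step if its own car is blocked, or the car
            % behind it moves in
            road = (road & ahead) | (behind & ~road);
        end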

  13. The Deterministic Information Bottleneck

    NASA Astrophysics Data System (ADS)

    Strouse, D. J.; Schwab, David

    2015-03-01

    A fundamental and ubiquitous task that all organisms face is prediction of the future based on past sensory experience. Since an individual's memory resources are limited and costly, however, there is a tradeoff between memory cost and predictive payoff. The information bottleneck (IB) method (Tishby, Pereira, & Bialek 2000) formulates this tradeoff as a mathematical optimization problem using an information theoretic cost function. IB encourages storing as few bits of past sensory input as possible while selectively preserving the bits that are most predictive of the future. Here we introduce an alternative formulation of the IB method, which we call the deterministic information bottleneck (DIB). First, we argue for an alternative cost function, which better represents the biologically-motivated goal of minimizing required memory resources. Then, we show that this seemingly minor change has the dramatic effect of converting the optimal memory encoder from stochastic to deterministic. Next, we propose an iterative algorithm for solving the DIB problem. Additionally, we compare the IB and DIB methods on a variety of synthetic datasets, and examine the performance of retinal ganglion cell populations relative to the optimal encoding strategy for each problem.
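
    In symbols, the change is confined to the compression term of the cost function (notation hedged, following the cited formulation):

        \mathcal{L}_{\mathrm{IB}}[p(t\mid x)]  = I(X;T) - \beta\, I(T;Y)
        \mathcal{L}_{\mathrm{DIB}}[p(t\mid x)] = H(T)   - \beta\, I(T;Y)

    Replacing the mutual information I(X;T) with the entropy H(T) charges for every bit the representation uses rather than every bit it shares with the input, and, as the abstract notes, the optimum of the DIB cost is attained by a deterministic encoder.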

  14. An Accelerator Control Middle Layer Using MATLAB

    SciTech Connect

    Portmann, Gregory J.; Corbett, Jeff; Terebilo, Andrei

    2005-05-15

    Matlab is an interpretive programming language originally developed for convenient use with the LINPACK and EISPACK libraries. Matlab is appealing for accelerator physics because it is matrix-oriented and provides an active workspace for system variables, powerful graphics capabilities, built-in math libraries, and platform independence. A number of accelerator software toolboxes have been written in Matlab -- the Accelerator Toolbox (AT) for model-based machine simulations, LOCO for on-line model calibration, and Matlab Channel Access (MCA) to connect with EPICS. The function of the MATLAB ''MiddleLayer'' is to provide a scripting language for machine simulations and on-line control, including non-EPICS based control systems. The MiddleLayer has simplified and streamlined development of high-level applications including configuration control, energy ramp, orbit correction, photon beam steering, ID compensation, beam-based alignment, tune correction and response matrix measurement. The database-driven Middle Layer software is largely machine-independent and easy to port. Six accelerators presently use the software package, with more scheduled to come on line soon.

  15. Deterministic geologic processes and stochastic modeling

    SciTech Connect

    Rautman, C.A.; Flint, A.L.

    1991-12-31

    Recent outcrop sampling at Yucca Mountain, Nevada, has produced significant new information regarding the distribution of physical properties at the site of a potential high-level nuclear waste repository. Consideration of the spatial distribution of measured values and geostatistical measures of spatial variability indicates that there are a number of widespread deterministic geologic features at the site that have important implications for numerical modeling of such performance aspects as ground water flow and radionuclide transport. These deterministic features have their origin in the complex, yet logical, interplay of a number of deterministic geologic processes, including magmatic evolution; volcanic eruption, transport, and emplacement; post-emplacement cooling and alteration; and late-stage (diagenetic) alteration. Because the geologic processes responsible for the formation of Yucca Mountain are relatively well understood and operate on a more-or-less regional scale, an understanding of these processes can be used in modeling the physical properties and performance of the site. Information reflecting these deterministic geologic processes may be incorporated into the modeling program explicitly, using geostatistical concepts such as soft information, or implicitly, through the adoption of a particular approach to modeling. It is unlikely that any single representation of physical properties at the site will be suitable for all modeling purposes. Instead, the same underlying physical reality will need to be described many times, each in a manner conducive to assessing specific performance issues.

  16. MATLAB-BASED LOCO

    SciTech Connect

    Safranek, James

    2002-08-23

    The storage ring linear optics debugging code LOCO (Linear Optics from Closed Orbits)[1] has been rewritten in MATLAB and linked to the accelerator modeling code AT [2]. LOCO uses the measured orbit response matrix to determine normal and skew quadrupole gradients. A MATLAB GUI provides a greatly improved user interface with graphical display of the fitting results. The option of including the shift in orbit with rf-frequency in the orbit response matrix has been added so that the model is adjusted to match the measured dispersion. This facilitates control of the horizontal dispersion, which is important for achieving small horizontal emittance. Also included are error bar calculation, outlier data rejection, accommodation of single-view BPMs (beam position monitors), and the option of including coupling in the fit. The code was written to allow the flexibility of linking it to other accelerator modeling codes.
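
    The core of the fitting step can be pictured as a nonlinear least-squares problem over gradient errors. Below is a toy, self-contained stand-in for that idea (model_response is a hypothetical linearized model, not LOCO's or AT's actual interface; lsqnonlin is an Optimization Toolbox function):

        nquad = 4;
        M = randn(20, nquad);                     % toy sensitivity of R to gradients
        model_response = @(dK) M * dK;            % vectorized model response matrix
        dK_true = 0.01 * randn(nquad, 1);
        R_meas = model_response(dK_true) + 1e-4 * randn(20, 1);  % "measured" data
        resid  = @(dK) model_response(dK) - R_meas;
        dK_fit = lsqnonlin(resid, zeros(nquad, 1));  % fit gradient errors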

  17. An Accelerator Control Middle Layer Using MATLAB

    SciTech Connect

    Portmann, Gregory J.; Corbett, Jeff; Terebilo, Andrei

    2005-03-15

    Matlab is a matrix manipulation language originally developed to be a convenient language for using the LINPACK and EISPACK libraries. What makes Matlab so appealing for accelerator physics is the combination of a matrix oriented programming language, an active workspace for system variables, powerful graphics capability, built-in math libraries, and platform independence. A number of software toolboxes for accelerators have been written in Matlab--the Accelerator Toolbox (AT) for machine simulations, LOCO for accelerator calibration, Matlab Channel Access Toolbox (MCA) for EPICS connections, and the Middle Layer. This paper will describe the ''middle layer'' software toolbox that resides between the high-level control applications and the low-level accelerator control system. This software was a collaborative effort between ALS (LBNL) and SPEAR3 (SSRL) but easily ports to other machines. Five accelerators presently use this software. The high-level Middle Layer functionality includes energy ramp, configuration control (save/restore), global orbit correction, local photon beam steering, insertion device compensation, beam-based alignment, tune correction, response matrix measurement, and script-based programs for machine physics studies.

  18. MATLAB and graphical user interfaces: tools for experimental management.

    PubMed

    Harley, E M; Loftus, G R

    2000-05-01

    MATLAB is a convenient platform for the development and management of psychological experiments because of its easy-to-use programming language, sophisticated graphics features, and statistics and optimization tools. Through implementation of the Brainard-Pelli Psychophysics Toolbox, the MATLAB user gains close temporal and spatial control over the CRT, while retaining the simplicity of an interpreted language conducive to rapid program development. MATLAB's abilities can be further utilized through easily programmable graphical user interfaces (GUIs). We illustrate how a GUI can serve as a powerful and intuitive tool for organizing and controlling all aspects of a psychological experiment, including design, data collection, data analysis, and theory fitting.

  19. Self-stabilizing Deterministic Gathering

    NASA Astrophysics Data System (ADS)

    Dieudonné, Yoann; Petit, Franck

    In this paper, we investigate the possibility to deterministically solve the gathering problem (GP) with weak robots (anonymous, autonomous, disoriented, oblivious, deaf, and dumb). We introduce strong multiplicity detection as the ability for the robots to detect the exact number of robots located at a given position. We show that with strong multiplicity detection, there exists a deterministic self-stabilizing algorithm solving GP for n robots if, and only if, n is odd.

  20. Matlab Cluster Ensemble Toolbox

    SciTech Connect

    Sapio, Vincent De; Kegelmeyer, Philip

    2009-04-27

    This is a Matlab toolbox for investigating the application of cluster ensembles to data classification, with the objective of improving the accuracy and/or speed of clustering. The toolbox divides the cluster ensemble problem into four areas, providing functionality for each. These include: (1) synthetic data generation, (2) clustering to generate individual data partitions and similarity matrices, (3) consensus function generation and final clustering to generate ensemble data partitioning, and (4) implementation of accuracy metrics. With regard to data generation, Gaussian data of arbitrary dimension can be generated. The kcenters algorithm can then be used to generate individual data partitions, either by (a) subsampling the data and clustering each subsample, or by (b) randomly initializing the algorithm and generating a clustering for each initialization. In either case an overall similarity matrix can be computed using a consensus function operating on the individual similarity matrices. A final clustering can be performed, and performance metrics are provided for evaluation purposes.
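
    A compact way to see steps (2)-(3) together is the co-association construction: cluster repeatedly, count how often each pair of points lands in the same cluster, and then cluster that similarity matrix. A minimal sketch (kmeans, linkage, cluster and squareform are Statistics Toolbox functions; the toolbox's own kcenters-based routines are not reproduced here):

        X = [randn(50, 2); randn(50, 2) + 4];  % synthetic Gaussian data, 2 groups
        n = size(X, 1); nruns = 20; k = 2;
        S = zeros(n);                          % co-association similarity matrix
        for r = 1:nruns
            labels = kmeans(X, k);             % random initialization each run
            S = S + (labels == labels');       % +1 when two points co-cluster
        end
        S = S / nruns;
        Z = linkage(squareform(1 - S), 'average');   % consensus (final) clustering
        c = cluster(Z, 'maxclust', k);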

  1. Matpar: Parallel Extensions for MATLAB

    NASA Technical Reports Server (NTRS)

    Springer, P. L.

    1998-01-01

    Matpar is a set of client/server software that allows a MATLAB user to take advantage of a parallel computer for very large problems. The user can replace calls to certain built-in MATLAB functions with calls to Matpar functions.

  2. Mixed deterministic and probabilistic networks.

    PubMed

    Mateescu, Robert; Dechter, Rina

    2008-11-01

    The paper introduces mixed networks, a new graphical model framework for expressing and reasoning with probabilistic and deterministic information. The motivation to develop mixed networks stems from the desire to fully exploit the deterministic information (constraints) that is often present in graphical models. Several concepts and algorithms specific to belief networks and constraint networks are combined, achieving computational efficiency, semantic coherence and user-interface convenience. We define the semantics and graphical representation of mixed networks, and discuss the two main types of algorithms for processing them: inference-based and search-based. A preliminary experimental evaluation shows the benefits of the new model.

  3. Mixed deterministic and probabilistic networks

    PubMed Central

    Dechter, Rina

    2010-01-01

    The paper introduces mixed networks, a new graphical model framework for expressing and reasoning with probabilistic and deterministic information. The motivation to develop mixed networks stems from the desire to fully exploit the deterministic information (constraints) that is often present in graphical models. Several concepts and algorithms specific to belief networks and constraint networks are combined, achieving computational efficiency, semantic coherence and user-interface convenience. We define the semantics and graphical representation of mixed networks, and discuss the two main types of algorithms for processing them: inference-based and search-based. A preliminary experimental evaluation shows the benefits of the new model. PMID:20981243

  4. Fluid turbulence - Deterministic or statistical

    NASA Astrophysics Data System (ADS)

    Cheng, Sin-I.

    The deterministic view of turbulence suggests that the classical theory of fluid turbulence may be treating the wrong entity. The paper explores the physical implications of such an abstract mathematical result, and provides a constructive computational demonstration of the deterministic and wave nature of fluid turbulence. The associated pressure disturbance for restoring solenoidal velocity is the primary agent, and its reflection from solid surfaces is the dominant mechanism of turbulence production. Statistical properties and their modeling must address the statistics of the uncertainties in the initial and boundary data of the ensemble.

  5. Why did maternal mortality decline in Matlab?

    PubMed

    Maine, D; Akalin, M Z; Chakraborty, J; de Francisco, A; Strong, M

    1996-01-01

    In 1991, an article on the Maternity Care Program in Matlab, Bangladesh, reported a substantial decline in direct obstetric deaths in the intervention area, but not in the control area. The decline was attributed primarily to the posting of midwives at the village level. In this article, data are presented from the same period and area on a variety of intermediate events. They indicate that the decline in deaths was probably due to the combined efforts of community midwives and the physicians at the Matlab maternity clinic. Their ability to refer patients to higher levels of care was important. The data further indicate that the decline in deaths depended upon the functioning of the government hospital in Chandpur, where cesarean sections and blood transfusions were available. Midwives might also have made a special contribution by providing early termination of pregnancy, which is legal in Bangladesh.

  6. Parallel Matlab: The Next Generation

    DTIC Science & Technology

    2007-11-02

    [Figure and slide residue from the original briefing. Recoverable information: a bandwidth versus message size plot (2 KB to 8 MB) comparing MatlabMPI and pMatlab on a cluster of workstations, with pMatlab equaling the underlying MatlabMPI bandwidth at large message sizes and latency as the primary difference (70 vs. 35 milliseconds); and an application list: Lincoln Hyperspectral Imaging (~3 on 3 CPUs), MIT LCS Beowulf (11 Gflops on 9 duals), MIT AI Lab Machine Vision, OSU EM Simulations, ARL SAR.]

  7. GeoTemp™ 1.0: A MATLAB-based program for the processing, interpretation and modelling of geological formation temperature measurements

    NASA Astrophysics Data System (ADS)

    Ricard, Ludovic P.; Chanu, Jean-Baptiste

    2013-08-01

    The evaluation of potential and resources during geothermal exploration requires accurate and consistent temperature characterization and modelling of the sub-surface. Existing approaches to the interpretation and modelling of 1D temperature measurements mainly focus on vertical heat conduction, with only a few that deal with advective heat transport. Thermal regimes are strongly correlated with rock and fluid properties. Currently, no consensus exists for the identification of the thermal regime and the analysis of such datasets. We developed a new framework allowing the identification of thermal regimes by rock formation, and the analysis and modelling of wireline logging and discrete temperature measurements by taking into account the geological, geophysical and petrophysical data. This framework has been implemented in the GeoTemp software package, which allows complete thermal characterization and modelling at the formation scale and provides a set of standard tools for processing wireline and discrete temperature data. GeoTemp™ operates via a user-friendly graphical interface written in Matlab that allows semi-automatic calculation, display and export of the results. Output results can be exported as Microsoft Excel spreadsheets or vector graphics of publication quality. GeoTemp™ is illustrated here with an example geothermal application from Western Australia and can be used for academic, teaching and professional purposes.

  8. Deterministic models for traffic jams

    NASA Astrophysics Data System (ADS)

    Nagel, Kai; Herrmann, Hans J.

    1993-10-01

    We study several deterministic one-dimensional traffic models. For integer positions and velocities we find the typical high- and low-density phases separated by a simple transition. If positions and velocities are continuous variables, the model shows self-organized criticality driven by the slowest car.

  9. MTCLAB: A MATLAB®-based program for traveltime quality analysis and pre-inversion velocity tuning in 2D transmission tomography

    NASA Astrophysics Data System (ADS)

    Fernández-Martínez, J. L.; Fernández-Alvarez, J. P.; Pedruelo-González, L. M.

    2008-03-01

    A MATLAB®-based computer code that analyses the traveltime distribution and performs quality analysis at the pre-inversion stage for 2D transmission experiments is presented. The core tools of this approach are the so-called mean traveltime curves. For any general recording geometry, the user may select any pair of subsets of contiguous sources and receivers. The portion of the domain swept by the implied rays defines a zone of analysis, and for each source (receiver) the outcoming (incoming) ray fan is named a source (receiver) gather. The empirical mean traveltime curves are constructed, for each zone, by assigning the average and the standard deviation of the traveltimes in the gathers to the positions of the sources (receivers). The theoretical expressions assume isotropic homogeneous velocity inside each zone. The empirical counterparts use the observed traveltimes and make no assumptions. Isotropic velocity in each zone is inferred by least-squares fitting of the empirical mean traveltime curves. The user may refine the analysis by considering different zones (multi-zone analysis). Initially the whole domain is modelled as a single zone. The procedure compares empirical versus theoretical curves. In addition, residuals can be plotted using source-receiver positions as plane coordinates. The results are used to reveal the possible presence of anomalous gathers, heterogeneities, anisotropies, etc. Depending on the kind of anomalies, the velocity estimates and mean time residuals differ between the source and receiver gather curves. This software helps the user gain a better understanding of the data variability before the inversion and provides the geophysicist with an approximate zonal isotropic model and a range of velocity variation that can be used in the inverse problem as a priori information (regularization term). Its use is described through tutorial examples. A guided user interface leads the user through the algorithm steps.

  10. Analysis of FBC deterministic chaos

    SciTech Connect

    Daw, C.S.

    1996-06-01

    It has recently been discovered that the performance of a number of fossil energy conversion devices, such as fluidized beds, pulsed combustors, steady combustors, and internal combustion engines, is affected by deterministic chaos. It is now recognized that understanding and controlling the chaotic elements of these devices can lead to significantly improved energy efficiency and reduced emissions. Application of these techniques to key fossil energy processes is expected to provide important competitive advantages for U.S. industry.

  11. OMPC: an Open-Source MATLAB-to-Python Compiler.

    PubMed

    Jurica, Peter; van Leeuwen, Cees

    2009-01-01

    Free access to scientific information facilitates scientific progress. Open-access scientific journals are a first step in this direction; a further step is to make auxiliary and supplementary materials that accompany scientific publications, such as methodological procedures and data-analysis tools, open and accessible to the scientific community. To this purpose it is instrumental to establish a software base, which will grow toward a comprehensive free and open-source language of technical and scientific computing. Endeavors in this direction are met with an important obstacle. MATLAB®, the predominant computation tool in many fields of research, is a closed-source commercial product. To facilitate the transition to an open computation platform, we propose the Open-source MATLAB®-to-Python Compiler (OMPC), a platform that uses syntax adaptation and emulation to allow transparent import of existing MATLAB® functions into Python programs. The imported MATLAB® modules will run independently of MATLAB®, relying on Python's numerical and scientific libraries. Python offers a stable and mature open-source platform that, in many respects, surpasses commonly used, expensive commercial closed-source packages. The proposed software will therefore facilitate the transparent transition towards a free and general open-source lingua franca for scientific computation, while enabling access to the existing methods and algorithms of technical computing already available in MATLAB®. OMPC is available at http://ompc.juricap.com.

  12. Documentation generator application for MatLab source codes

    NASA Astrophysics Data System (ADS)

    Niton, B.; Pozniak, K. T.; Romaniuk, R. S.

    2011-06-01

    The UML, a technology for modeling and describing complex systems, has recently been expanding its use in the formalization and algorithmic description of systems such as multiprocessor photonic, optoelectronic and advanced electronics carriers; distributed, multichannel measurement systems; optical networks; industrial electronics; and novel R&D solutions. The paper describes the realization of an application for documenting MatLab source codes. A novel solution is presented, based on the Doxygen program, which is available under a free license with accessible source code. Bison and Flex were used as supporting tools for building the parser. Practical results of the documentation generator are presented; the program was applied to exemplary MatLab codes. The documentation generator application is used for the design of large optoelectronic and electronic measurement and control systems. The paper consists of three parts, which describe the following components of the documentation generator for photonic and electronic systems: concept, MatLab application and VHDL application. This is part two, which describes the MatLab application. MatLab is used for description of the measured phenomena.

  13. Coded Modulation in C and MATLAB

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon; Andrews, Kenneth S.

    2011-01-01

    This software, written separately in C and MATLAB as stand-alone packages with equivalent functionality, implements encoders and decoders for a set of nine error-correcting codes and modulators and demodulators for five modulation types. The software can be used as a single program to simulate the performance of such coded modulation. The error-correcting codes implemented are the nine accumulate repeat-4 jagged accumulate (AR4JA) low-density parity-check (LDPC) codes, which have been approved for international standardization by the Consultative Committee for Space Data Systems, and which are scheduled to fly on a series of NASA missions in the Constellation Program. The software implements the encoder and decoder functions, and contains compressed versions of generator and parity-check matrices used in these operations.

  14. Using Matlab to generate families of similar Attneave shapes.

    PubMed

    Collin, Charles A; McMullen, Patricia A

    2002-02-01

    We present a program for Matlab that quickly generates Attneave-style random polygons and families of similar polygons. The function allows a great deal of user control over various aspects of the shape generation process. It also has the ability to detect and eliminate shapes that do not match a variety of user-entered parameters regarding the lengths of the shapes' sides, vertex angles, and topological form. The function eliminates the time-consuming task of generating such shapes by hand and should allow their broader use in behavioral research. The Matlab script function can be downloaded at www.dal.ca/~mcmullen/downloads.html.

  15. Survivability of Deterministic Dynamical Systems

    PubMed Central

    Hellmann, Frank; Schultz, Paul; Grabow, Carsten; Heitzig, Jobst; Kurths, Jürgen

    2016-01-01

    The notion of a part of phase space containing desired (or allowed) states of a dynamical system is important in a wide range of complex systems research. It has been called the safe operating space, the viability kernel or the sunny region. In this paper we define the notion of survivability: Given a random initial condition, what is the likelihood that the transient behaviour of a deterministic system does not leave a region of desirable states. We demonstrate the utility of this novel stability measure by considering models from climate science, neuronal networks and power grids. We also show that a semi-analytic lower bound for the survivability of linear systems allows a numerically very efficient survivability analysis in realistic models of power grids. Our numerical and semi-analytic work underlines that the type of stability measured by survivability is not captured by common asymptotic stability measures. PMID:27405955
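
    The definition suggests a direct Monte Carlo estimator: sample random initial conditions, integrate, and count the transients that never leave the desirable region. A hedged, illustrative sketch for a linear system (the system and region below are toy choices, not taken from the paper):

        A = [0 1; -1 -0.3];                      % damped oscillator dynamics
        inRegion = @(x) all(abs(x) <= 2, 1);     % desirable states: a box
        nSamples = 1000; survived = 0;
        for s = 1:nSamples
            x0 = 4 * (rand(2, 1) - 0.5);         % random initial condition
            [~, X] = ode45(@(t, x) A * x, [0 20], x0);
            if all(inRegion(X'))                 % entire transient stayed inside
                survived = survived + 1;
            end
        end
        survivability = survived / nSamples      % Monte Carlo estimate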

  16. Deterministic weak localization in periodic structures.

    PubMed

    Tian, C; Larkin, A

    2005-12-09

    In some perfect periodic structures classical motion exhibits deterministic diffusion. For such systems we present the weak localization theory. As a manifestation for the velocity autocorrelation function a universal power law decay is predicted to appear at four Ehrenfest times. This deterministic weak localization is robust against weak quenched disorders, which may be confirmed by coherent backscattering measurements of periodic photonic crystals.

  17. Deterministic transfer function for transionospheric propagation

    NASA Astrophysics Data System (ADS)

    Roussel-Dupre, R.; Argo, P.

    Recent interest in ground-to-satellite propagation of broadband signals has prompted investigation into the development of a transfer function for the ionosphere that includes effects such as dispersion, refraction, changes in polarization, reflection, absorption, and scattering. Depending on the application (e.g. geolocation), it may be necessary to incorporate all of these processes in order to extract the information of interest from the measured transionospheric signal. A transfer function for midlatitudes at VHF from 25-175 MHz is one of the goals of the BLACKBEARD program in characterizing propagation distortion. In support of this program we discuss in this paper an analytic model for the deterministic transfer function of the ionosphere that includes the effects of dispersion, refraction, and changes in polarization to second order in the parameter X = ω_pe^2/ω^2, where X is assumed to be small compared to one, ω_pe is the peak plasma frequency of the ionosphere, and ω is the wave frequency. Analytic expressions for the total phase change, group delay, and polarization change in a spherical geometry assuming a radial electron density profile are presented. A computer code ITF (Ionospheric Transfer Function) that makes use of the ICED (Ionospheric Conductivity and Electron Density) model to generate electron density profiles was developed to calculate the ionospheric transfer function along a specified transmitter-to-receiver path. Details of this code will be presented as well as comparisons made between ITF analytic results and ray-tracing calculations.
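
    For reference, the small-X expansion underlying such a transfer function follows from the cold, unmagnetized plasma refractive index (a standard result consistent with the abstract's stated expansion, not quoted from the paper itself):

        X = \frac{\omega_{pe}^2}{\omega^2} \ll 1, \qquad
        n(\omega) = \sqrt{1 - X} \approx 1 - \frac{X}{2} - \frac{X^2}{8},
        \qquad
        \phi(\omega) \approx \frac{\omega}{c} \int \Big(1 - \frac{X}{2} - \frac{X^2}{8}\Big)\, ds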

  18. Deterministic quantum teleportation with atoms.

    PubMed

    Riebe, M; Häffner, H; Roos, C F; Hänsel, W; Benhelm, J; Lancaster, G P T; Körber, T W; Becher, C; Schmidt-Kaler, F; James, D F V; Blatt, R

    2004-06-17

    Teleportation of a quantum state encompasses the complete transfer of information from one particle to another. The complete specification of the quantum state of a system generally requires an infinite amount of information, even for simple two-level systems (qubits). Moreover, the principles of quantum mechanics dictate that any measurement on a system immediately alters its state, while yielding at most one bit of information. The transfer of a state from one system to another (by performing measurements on the first and operations on the second) might therefore appear impossible. However, it has been shown that the entangling properties of quantum mechanics, in combination with classical communication, allow quantum-state teleportation to be performed. Teleportation using pairs of entangled photons has been demonstrated, but such techniques are probabilistic, requiring post-selection of measured photons. Here, we report deterministic quantum-state teleportation between a pair of trapped calcium ions. Following closely the original proposal, we create a highly entangled pair of ions and perform a complete Bell-state measurement involving one ion from this pair and a third source ion. State reconstruction conditioned on this measurement is then performed on the other half of the entangled pair. The measured fidelity is 75%, demonstrating unequivocally the quantum nature of the process.

  19. Deterministic patterns in cell motility

    NASA Astrophysics Data System (ADS)

    Lavi, Ido; Piel, Matthieu; Lennon-Duménil, Ana-Maria; Voituriez, Raphaël; Gov, Nir S.

    2016-12-01

    Cell migration paths are generally described as random walks, associated with both intrinsic and extrinsic noise. However, complex cell locomotion is not merely related to such fluctuations, but is often determined by the underlying machinery. Cell motility is driven mechanically by actin and myosin, two molecular components that generate contractile forces. Other cell functions make use of the same components and, therefore, will compete with the migratory apparatus. Here, we propose a physical model of such a competitive system, namely dendritic cells whose antigen capture function and migratory ability are coupled by myosin II. The model predicts that this coupling gives rise to a dynamic instability, whereby cells switch from persistent migration to unidirectional self-oscillation, through a Hopf bifurcation. Cells can then switch to periodic polarity reversals through a homoclinic bifurcation. These predicted dynamic regimes are characterized by robust features that we identify through in vitro trajectories of dendritic cells over long timescales and distances. We expect that competition for limited resources in other migrating cell types can lead to similar deterministic migration modes.

  20. Acoustic Propagation Modeling Using MATLAB

    DTIC Science & Technology

    1993-09-01

    [Report documentation page and reference list residue; recoverable citation fragment: "... media," in Acoustical Imaging, Volume 14 (A. Berkhout, J. Ridder, and L. van der Wal, eds.), pp. 521-531, New York: Plenum Press, 1985. [16] MATLAB]

  1. OXlearn: a new MATLAB-based simulation tool for connectionist models.

    PubMed

    Ruh, Nicolas; Westermann, Gert

    2009-11-01

    OXlearn is a free, platform-independent MATLAB toolbox in which standard connectionist neural network models can be set up, run, and analyzed by means of a user-friendly graphical interface. Due to its seamless integration with the MATLAB programming environment, the inner workings of the simulation tool can be easily inspected and/or extended using native MATLAB commands or components. This combination of usability, transparency, and extendability makes OXlearn an efficient tool for the implementation of basic research projects or the prototyping of more complex research endeavors, as well as for teaching. Both the MATLAB toolbox and a compiled version that does not require access to MATLAB can be downloaded from http://psych.brookes.ac.uk/oxlearn/.

  2. Universality classes for deterministic surface growth

    NASA Technical Reports Server (NTRS)

    Krug, J.; Spohn, H.

    1988-01-01

    A scaling theory for the generalized deterministic Kardar-Parisi-Zhang (1986) equation with beta greater than or equal to 1 is developed to study the growth of a surface through deterministic local rules. A one-dimensional surface model corresponding to beta = 1 is presented and solved exactly. The model can be studied as a limiting case of ballistic deposition, or as the deterministic limit of the Eden (1961) model. The scaling exponents, the correlation functions, and the skewness of the surface are determined. The results are compared with those of Burgers' (1974) equation for the case of beta = 2.

  3. Connecting deterministic and stochastic metapopulation models.

    PubMed

    Barbour, A D; McVinish, R; Pollett, P K

    2015-12-01

    In this paper, we study the relationship between certain stochastic and deterministic versions of Hanski's incidence function model and the spatially realistic Levins model. We show that the stochastic version can be well approximated in a certain sense by the deterministic version when the number of habitat patches is large, provided that the presence or absence of individuals in a given patch is influenced by a large number of other patches. Explicit bounds on the deviation between the stochastic and deterministic models are given.
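
    A useful reference point here is the classical (non-spatial) Levins dynamics for the fraction p of occupied patches, which is easy to integrate directly (a minimal sketch with illustrative colonization and extinction rates c and e; the equilibrium is p* = 1 - e/c when c > e):

        c = 0.5; e = 0.2;
        levins = @(t, p) c * p .* (1 - p) - e * p;   % dp/dt = c p (1 - p) - e p
        [t, p] = ode45(levins, [0 100], 0.05);       % start near extinction
        plot(t, p); xlabel('time'); ylabel('fraction of occupied patches');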

  4. On the secure obfuscation of deterministic finite automata.

    SciTech Connect

    Anderson, William Erik

    2008-06-01

    In this paper, we show how to construct secure obfuscation for Deterministic Finite Automata, assuming non-uniformly strong one-way functions exist. We revisit the software protection approaches originally proposed by [5, 10, 12, 17] and revise them to the current obfuscation setting of Barak et al. [2]. Under this model, we introduce an efficient oracle that retains some 'small' secret about the original program. Using this secret, we can construct an obfuscator and two-party protocol that securely obfuscates Deterministic Finite Automata against malicious adversaries. The security of this model retains the strong 'virtual black box' property originally proposed in [2] while incorporating the stronger condition of dependent auxiliary inputs in [15]. Additionally, we show that our techniques remain secure under concurrent self-composition with adaptive inputs and that Turing machines are obfuscatable under this model.

  5. MATLAB-Based VHDL Development Environment

    SciTech Connect

    Katko, K. K.; Robinson, S. H.

    2002-01-01

    The Reconfigurable Computing program at Los Alamos National Laboratory (LANL) required synthesizable VHDL Fast Fourier Transform (FFT) designs that could be quickly implemented into FPGA-based high-speed Digital Signal Processing architectures. Several different FFTs were needed for the different systems. As a result, the MATLAB-Based VHDL Development Environment was developed so that, with a small amount of work and forethought, arbitrarily sized FFTs with different bit-width parameters could be produced quickly from one VHDL-generating algorithm. The result is highly readable VHDL that can be modified quickly via the generating function to adapt to new algorithmic requirements. Several additional capabilities are integrated into the development environment. These capabilities include a bit-true parameterized mathematical model, fixed-point design validation, test vector generation, VHDL design verification, and chip resource use estimation. LANL needed the flexibility to build a wide variety of FFTs with a quick turnaround time. It was important to have an effective way of trading off size, speed and precision. The FFTs also needed to be efficiently implemented into our existing FPGA-based architecture. Reconfigurable computing systems at LANL have been designed to accept two or four inputs on each clock. This allows the data processing rate to be reduced to a more manageable speed. This approach, however, prevents us from using existing FFT cores. A MATLAB-Based VHDL Development Environment (MBVDE) was created in response to our FFT needs. MBVDE provides more flexibility than is available with VHDL. The technique allows new designs to be implemented and verified quickly. In addition, analysis tools are incorporated to evaluate trade-offs. MBVDE combines into one environment the performance of VHDL, the fast design time of core generation, and the C-tools benefit of not having to know VHDL. The MBVDE approach is not a comprehensive solution, but

  6. IR FPA sensor characterization and analysis using Matlab™

    NASA Astrophysics Data System (ADS)

    Burke, Michael J.; Wan, William H.

    1998-08-01

    This paper documents the Matlab routines used to conduct infrared focal plane array (IR-FPA) sensor data analysis. Matlab is a commercially available software package that enables users to conduct a multitude of data analyses, file I/O, and generation of graphics with little or no computer programming skills. This effort was conducted in support of the US Army Tank-automotive and Armaments Command-Armament Research, Development and Engineering Center's (TACOM-ARDEC) 120 mm Precision Guided Mortar Munition (PGMM). PGMM's sensor included a 256 x 256 mid-band IR-FPA. This paper summarizes a primer generated to help train PGMM sensor engineers to use Matlab for conducting IR-FPA image analysis. A brief system description of the PGMM IR sensor is presented, followed by a discussion of the Matlab IR-FPA image analysis, including measurement of FPA operability, noise equivalent temperature difference, temporal noise, and spatial noise, as well as gain and offset calibration for non-uniformity correction.

  7. Deterministic noiseless amplification of coherent states

    NASA Astrophysics Data System (ADS)

    Hu, Meng-Jun; Zhang, Yong-Sheng

    2015-08-01

    A universal deterministic noiseless quantum amplifier has been shown to be impossible. However, probabilistic noiseless amplification of a certain set of states is physically permissible. Regarding quantum state amplification as quantum state transformation, we show that deterministic noiseless amplification of coherent states chosen from a proper set is attainable. The relation between the input coherent states and the gain of amplification for deterministic noiseless amplification is thus derived. Furthermore, we extend our result to a more general situation and show that deterministic noiseless amplification of Gaussian states is also possible. As an example application, we find that our amplification model can obtain better performance in homodyne detection to measure the phase of a state selected from a certain set. Besides, other possible applications are also discussed.

  8. Deterministic transfer function for transionospheric propagation

    SciTech Connect

    Roussel-Dupre, R.; Argo, P.

    1992-01-01

    Recent interest in ground-to-satellite propagation of broadband signals has prompted investigation into the development of a transfer function for the ionosphere that includes effects such as dispersion, refraction, changes in polarization, reflection, absorption, and scattering. Depending on the application (e.g. geolocation), it may be necessary to incorporate all of these processes in order to extract the information of interest from the measured transionospheric signal. A transfer function for midlatitudes at VHF from 25--175 MHz is one of the goals of the BLACKBEARD program in characterizing propagation distortion. In support of this program we discuss in this paper an analytic model for the deterministic transfer function of the ionosphere that includes the effects of dispersion, refraction, and changes in polarization to second order in the parameter X = ω_pe^2/ω^2, where X is assumed to be small compared to one, ω_pe is the peak plasma frequency of the ionosphere, and ω is the wave frequency. Analytic expressions for the total phase change, group delay, and polarization change in a spherical geometry assuming a radial electron density profile are presented. A computer code ITF (Ionospheric Transfer Function) that makes use of the ICED (Ionospheric Conductivity and Electron Density) model to generate electron density profiles was developed to calculate the ionospheric transfer function along a specified transmitter-to-receiver path. Details of this code will be presented as well as comparisons made between ITF analytic results and ray-tracing calculations.

  9. Deterministic transfer function for transionospheric propagation

    SciTech Connect

    Roussel-Dupre, R.; Argo, P.

    1992-09-01

    Recent interest in ground-to-satellite propagation of broadband signals has prompted investigation into the development of a transfer function for the ionosphere that includes effects such as dispersion, refraction, changes in polarization, reflection, absorption, and scattering. Depending on the application (e.g. geolocation), it may be necessary to incorporate all of these processes in order to extract the information of interest from the measured transionospheric signal. A transfer function for midlatitudes at VHF from 25--175 MHz is one of the goals of the BLACKBEARD program in characterizing propagation distortion. In support of this program we discuss in this paper an analytic model for the deterministic transfer function of the ionosphere that includes the effects of dispersion, refraction, and changes in polarization to second order in the parameter X = ω_pe^2/ω^2, where X is assumed to be small compared to one, ω_pe is the peak plasma frequency of the ionosphere, and ω is the wave frequency. Analytic expressions for the total phase change, group delay, and polarization change in a spherical geometry assuming a radial electron density profile are presented. A computer code ITF (Ionospheric Transfer Function) that makes use of the ICED (Ionospheric Conductivity and Electron Density) model to generate electron density profiles was developed to calculate the ionospheric transfer function along a specified transmitter-to-receiver path. Details of this code will be presented as well as comparisons made between ITF analytic results and ray-tracing calculations.

  10. Using Matlab in a Multivariable Calculus Course.

    ERIC Educational Resources Information Center

    Schlatter, Mark D.

    The benefits of high-level mathematics packages such as Matlab include both a computer algebra system and the ability to provide students with concrete visual examples. This paper discusses how both capabilities of Matlab were used in a multivariate calculus class. Graphical user interfaces which display three-dimensional surfaces, contour plots,…

  11. VQone MATLAB toolbox: A graphical experiment builder for image and video quality evaluations.

    PubMed

    Nuutinen, Mikko; Virtanen, Toni; Rummukainen, Olli; Häkkinen, Jukka

    2016-03-01

    This article presents VQone, a graphical experiment builder, written as a MATLAB toolbox, developed for image and video quality ratings. VQone contains the main elements needed for the subjective image and video quality rating process. This includes building and conducting experiments and data analysis. All functions can be controlled through graphical user interfaces. The experiment builder includes many standardized image and video quality rating methods. Moreover, it enables the creation of new methods or modified versions from standard methods. VQone is distributed free of charge under the terms of the GNU general public license and allows code modifications to be made so that the program's functions can be adjusted according to a user's requirements. VQone is available for download from the project page (http://www.helsinki.fi/psychology/groups/visualcognition/).

  12. MATLAB toolbox for functional connectivity.

    PubMed

    Zhou, Dongli; Thompson, Wesley K; Siegle, Greg

    2009-10-01

    The term "functional connectivity" is used to denote correlations in activation among spatially-distinct brain regions, either in a resting state or when processing external stimuli. Functional connectivity has been extensively evaluated with several functional neuroimaging methods, particularly PET and fMRI. Yet these relationships have been quantified using very different measures and the extent to which they index the same constructs is unclear. We have implemented a variety of these functional connectivity measures in a new freely available MATLAB toolbox. These measures are categorized into two groups: whole time-series and trial-based approaches. We evaluate these measures via simulations with different patterns of functional connectivity and provide recommendations for their use. We also apply these measures to a previously published fMRI data set (Siegle, G.J., Thompson, W., Carter, C.S., Steinhauer, S.R., Thase, M.E., 2007. Increased amygdala and decreased dorsolateral prefrontal BOLD responses in unipolar depression: related and independent features. Biol. Psychiatry 610 (2), 198-209) in which activity in dorsal anterior cingulate cortex (dACC) and dorsolateral prefrontal cortex (DLPFC) was evaluated in 32 control subjects during a digit sorting task. Though all implemented measures demonstrate functional connectivity between dACC and DLPFC activity during event-related tasks, different participants appeared to display qualitatively different relationships.

  13. Effect of Uncertainty on Deterministic Runway Scheduling

    NASA Technical Reports Server (NTRS)

    Gupta, Gautam; Malik, Waqar; Jung, Yoon C.

    2012-01-01

    Active runway scheduling involves scheduling departures for takeoffs and arrivals for runway crossings subject to numerous constraints. This paper evaluates the effect of uncertainty on a deterministic runway scheduler. The evaluation is done against a first-come-first-serve scheme. In particular, the sequence from the deterministic scheduler is frozen and the times adjusted to satisfy all separation criteria; this approach is tested against FCFS. The comparison is done for both system performance (throughput and system delay) and predictability, and varying levels of congestion are considered. The modeling of uncertainty is done in two ways: as equal uncertainty in availability at the runway for all aircraft, and as increasing uncertainty for later aircraft. Results indicate that the deterministic approach consistently performs better than first-come-first-serve in both system performance and predictability.
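
    The first-come-first-serve baseline itself is a one-pass computation: serve aircraft in order of estimated availability, pushing each back just enough to honor separation. A hedged sketch with illustrative numbers (the paper's separation criteria are more detailed than the single constant used here):

        eta = sort(100 + 200 * rand(10, 1));   % estimated runway times (s)
        sep = 60;                              % required separation (s)
        sched = eta;
        for i = 2:numel(sched)
            sched(i) = max(eta(i), sched(i-1) + sep);
        end
        total_delay = sum(sched - eta)         % system delay of the FCFS schedule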

  14. Optimal partial deterministic quantum teleportation of qubits

    SciTech Connect

    Mista, Ladislav Jr.; Filip, Radim

    2005-02-01

    We propose a protocol implementing optimal partial deterministic quantum teleportation for qubits. This is a teleportation scheme realizing deterministically an optimal 1→2 asymmetric universal cloning, where one imperfect copy of the input state emerges at the sender's station while the other copy emerges at the receiver's possibly distant station. The optimality means that the fidelities of the copies saturate the asymmetric cloning inequality. The performance of the protocol relies on the partial deterministic nondemolition Bell measurement that allows us to continuously control the flow of information among the outgoing qubits. We also demonstrate that the measurement is an optimal two-qubit operation in the sense of the trade-off between the state disturbance and the information gain.

  15. Sparse Matrices in MATLAB: Design and Implementation

    NASA Technical Reports Server (NTRS)

    Gilbert, John R.; Moler, Cleve; Schreiber, Robert

    1992-01-01

    The matrix computation language and environment MATLAB is extended to include sparse matrix storage and operations. The only change to the outward appearance of the MATLAB language is a pair of commands to create full or sparse matrices. Nearly all the operations of MATLAB now apply equally to full or sparse matrices, without any explicit action by the user. The sparse data structure represents a matrix in space proportional to the number of nonzero entries, and most of the operations compute sparse results in time proportional to the number of arithmetic operations on nonzeros.
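
    The design goal described above means sparse usage differs from full usage only at creation time, for example:

        A = sparse(1:5, 1:5, 2) + sparse(1:4, 2:5, -1, 5, 5);  % bidiagonal matrix
        b = ones(5, 1);
        x = A \ b;      % same backslash syntax as for full matrices
        nnz(A)          % storage and work proportional to the nonzeros
        B = full(A);    % explicit conversion back to a dense matrix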

  16. Deterministic Quantization by Dynamical Boundary Conditions

    SciTech Connect

    Dolce, Donatello

    2010-06-15

    We propose an unexplored quantization method. It is based on the assumption of dynamical space-time intrinsic periodicities for relativistic fields, which in turn can be regarded as dual to extra-dimensional fields. As a consequence we obtain a unified and consistent interpretation of Special Relativity and Quantum Mechanics in terms of Deterministic Geometrodynamics.

  17. Matlab GUI for a Fluid Mixer

    NASA Technical Reports Server (NTRS)

    Barbieri, Enrique

    2005-01-01

    The Test and Engineering Directorate at NASA John C. Stennis Space Center developed an interest in studying the modeling, evaluation, and control of a liquid hydrogen (LH2) and gaseous hydrogen (GH2) mixer subsystem of a ground test facility. This facility carries out comprehensive ground-based testing and certification of liquid rocket engines, including the Space Shuttle Main Engine. A software simulation environment developed in MATLAB/SIMULINK (M/S) will allow NASA engineers to test rocket engine systems at relatively no cost. In the progress report submitted in February 2004, we described the development of two foundation programs: a reverse look-up application using various interpolation algorithms, a variety of search and return methods, and self-checking methods to reduce the error in returned search results to increase the functionality of the program. The results showed that these efforts were successful. To transfer this technology to engineers who are not familiar with the M/S environment, a four-module GUI was implemented, allowing the user to evaluate the mixer model under open-loop and closed-loop conditions. The progress report was based on an undergraduate Honors Thesis by Ms. Jamie Granger Austin in the Department of Electrical Engineering and Computer Science at Tulane University, during January-May 2003, and her continued efforts during August-December 2003. In collaboration with Dr. Hanz Richter and Dr. Fernando Figueroa we published these results in a NASA Tech Brief due to appear this year. Although the original proposal in 2003 did not address other components of the test facility, we decided in the last few months to extend our research and consider a related pressurization tank component as well. This report summarizes the results obtained towards a Graphical User Interface (GUI) for the evaluation and control of the hydrogen mixer subsystem model and for the pressurization tank, each taken individually. Further research would combine the two
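
    The reverse look-up component mentioned above can be illustrated in a few lines of MATLAB: for a monotonic table y = f(x), swapping the roles of the two vectors in interp1 inverts the mapping (a generic sketch, not the program's actual code):

        x = linspace(0, 10, 101);
        y = 2 * x.^1.5 + 3;                 % monotonic forward table (illustrative)
        x_found = interp1(y, x, 25);        % reverse look-up: find x where y = 25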

  18. A Collection of Nonlinear Aircraft Simulations in MATLAB

    NASA Technical Reports Server (NTRS)

    Garza, Frederico R.; Morelli, Eugene A.

    2003-01-01

    Nonlinear six degree-of-freedom simulations for a variety of aircraft were created using MATLAB. Data for aircraft geometry, aerodynamic characteristics, mass / inertia properties, and engine characteristics were obtained from open literature publications documenting wind tunnel experiments and flight tests. Each nonlinear simulation was implemented within a common framework in MATLAB, and includes an interface with another commercially-available program to read pilot inputs and produce a three-dimensional (3-D) display of the simulated airplane motion. Aircraft simulations include the General Dynamics F-16 Fighting Falcon, Convair F-106B Delta Dart, Grumman F-14 Tomcat, McDonnell Douglas F-4 Phantom, NASA Langley Free-Flying Aircraft for Sub-scale Experimental Research (FASER), NASA HL-20 Lifting Body, NASA / DARPA X-31 Enhanced Fighter Maneuverability Demonstrator, and the Vought A-7 Corsair II. All nonlinear simulations and 3-D displays run in real time in response to pilot inputs, using contemporary desktop personal computer hardware. The simulations can also be run in batch mode. Each nonlinear simulation includes the full nonlinear dynamics of the bare airframe, with a scaled direct connection from pilot inputs to control surface deflections to provide adequate pilot control. Since all the nonlinear simulations are implemented entirely in MATLAB, user-defined control laws can be added in a straightforward fashion, and the simulations are portable across various computing platforms. Routines for trim, linearization, and numerical integration are included. The general nonlinear simulation framework and the specifics for each particular aircraft are documented.

  19. SUNDIALSTB, a MATLAB Interface to SUNDIALS

    SciTech Connect

    Serban, R

    2005-05-09

    SUNDIALS [2], SUite of Nonlinear and DIfferential/ALgebraic equation Solvers, is a family of software tools for integration of ODE and DAE initial value problems and for the solution of nonlinear systems of equations. It consists of CVODE, IDA, and KINSOL, and variants of these with sensitivity analysis capabilities. SUNDIALSTB is a collection of MATLAB functions which provide interfaces to the SUNDIALS solvers. The core of each MATLAB interface in SUNDIALSTB is a single MEX file which interfaces to the various user-callable functions for that solver. However, this MEX file should not be called directly, but rather through the user-callable functions provided for each MATLAB interface. A major design principle for SUNDIALSTB was to provide an interface that is, as much as possible, equally familiar to users of both the SUNDIALS codes and MATLAB. Moreover, we tried to keep the number of user-callable functions to a minimum. For example, the CVODES MATLAB interface contains only 9 such functions, 3 of which interface solely to the adjoint sensitivity module in CVODES. In tune with the MATLAB ODESET function, optional solver inputs in SUNDIALSTB are specified through a single function (CVodeSetOptions for CVODES). However, unlike the ODE solvers in MATLAB, we have kept the more flexible SUNDIALS model in which a separate ''solve'' function (CVodeSolve for CVODES) must be called to return the solution at a desired output time. Solver statistics, as well as optional outputs (such as solution and solution derivatives at additional times), can be obtained at any time with calls to separate functions (CVodeGetStats and CVodeGet for CVODES). This document provides complete documentation of the SUNDIALSTB functions. For additional details on the methods and the underlying SUNDIALS software, consult also the corresponding SUNDIALS user guides [3, 1].
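
    The calling pattern described above can be sketched as follows. The names CVodeSetOptions, CVodeSolve and CVodeGetStats come from the abstract itself, but the initialization and cleanup calls and all argument lists shown here are assumptions that may differ between SUNDIALSTB releases:

        % Hedged sketch of the SUNDIALSTB calling pattern; argument lists
        % are assumptions, not the documented API.
        y0 = [1; 0];                            % initial condition
        options = CVodeSetOptions('RelTol', 1e-6, 'AbsTol', 1e-8);
        CVodeMalloc(@rhsfn, 0.0, y0, options);  % assumed initialization call
        [status, t, y] = CVodeSolve(1.0);       % separate "solve" call returns y(1.0)
        stats = CVodeGetStats;                  % solver statistics on demand
        CVodeFree;                              % assumed cleanup call
        % rhsfn is the user right-hand side function, e.g.:
        % function yd = rhsfn(t, y), yd = [y(2); -y(1)]; end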

  20. Decline in maternal mortality in Matlab, Bangladesh: a cautionary tale.

    PubMed

    Ronsmans, C; Vanneste, A M; Chakraborty, J; van Ginneken, J

    This study examines the impact of the Maternal-Child Health and Family Planning (MCH-FP) program in Matlab, Bangladesh. Data were obtained from the Matlab surveillance system for treatment and comparison areas. This study reports the trends in maternal mortality since 1976. The MCH-FP area has received extensive health and family planning services since 1977. Services included trained traditional birth attendants and essential obstetric care from government district hospitals and a large number of private clinics. Geographic ease of access to essential obstetric care varied across the study area. Access was most difficult in the northern sector of the MCH-FP area. Contraception was made available through family welfare centers. Tetanus immunization was introduced in 1979. Door-to-door contraceptive services were provided by 80 female community health workers on a twice-monthly basis. In 1987, a community-based maternity care program was added to existing MCH-FP services in the northern treatment area. The demographic surveillance system began collecting data in 1966. During 1976-93 there were 624 maternal deaths among women aged 15-44 years in Matlab (510/100,000 live births). 72.8% of deaths were due to direct obstetric causes: postpartum hemorrhage, induced abortion, eclampsia, dystocia, and postpartum sepsis. Maternal mortality declined in a fluctuating fashion in both treatment and comparison areas. Direct obstetric mortality declined at about 3% per year. After 1987, direct obstetric mortality declined in the north by almost 50%. After the 1990 program expansion in the south, maternal mortality declined, though not significantly, in the south. Maternal mortality declined in the south comparison area during 1987-89 and then stabilized. The comparison area of the north showed no decline.

  1. Case studies on optimization problems in MATLAB and COMSOL multiphysics by means of the livelink

    NASA Astrophysics Data System (ADS)

    Ozana, Stepan; Pies, Martin; Docekal, Tomas

    2016-06-01

    LiveLink for COMSOL is a tool that integrates COMSOL Multiphysics with MATLAB, extending one's modeling with scripting programming in the MATLAB environment. It allows the user to utilize the full power of MATLAB and its toolboxes in preprocessing, model manipulation, and postprocessing. First, the head script launches COMSOL with MATLAB, defines the initial values of all parameters, refers to the objective function J, and creates and runs the defined optimization task. Once the task is launched, the COMSOL model is called in the iteration loop (from the MATLAB environment through the API interface), the defined optimization parameters being changed so that the objective function is minimized; the fmincon function is used to find a local or global minimum of a constrained linear or nonlinear multivariable function. Once the minimum is found, the routine returns an exit flag, terminates the optimization, and returns the optimized values of the parameters. The cooperation with MATLAB via LiveLink enhances a powerful computational environment with complex multiphysics simulations. The paper introduces the use of LiveLink for COMSOL in chosen case studies in the field of technical cybernetics and bioengineering.
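
    A compact sketch of the loop just described is given below; fmincon is the standard MATLAB Optimization Toolbox routine, while the COMSOL calls (mphload, model.param.set, model.study(...).run, mphglobal) follow common LiveLink usage, and the tags 'p1', 'p2', 'std1' and 'J' are hypothetical placeholders for the actual model:

        model = mphload('model.mph');               % load the COMSOL model via LiveLink
        p0 = [1.0; 0.5];                            % initial parameter values (assumed)
        lb = [0.1; 0.1];  ub = [10; 10];            % box constraints (assumed)
        pOpt = fmincon(@(p) evalJ(model, p), p0, [], [], [], [], lb, ub);

        function J = evalJ(model, p)
            model.param.set('p1', p(1));            % push parameters into the model
            model.param.set('p2', p(2));
            model.study('std1').run();              % re-solve the multiphysics model
            J = mphglobal(model, 'J');              % evaluate the objective expression
        end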

  2. Some selected quantitative methods of thermal image analysis in Matlab.

    PubMed

    Koprowski, Robert

    2016-05-01

    The paper presents a new algorithm based on selected automatic quantitative methods for analysing thermal images and shows the practical implementation of these image analysis methods in Matlab. It enables fully automated and reproducible measurements of selected parameters in thermal images. The paper also shows two examples of the use of the proposed image analysis methods, for the area of the skin of a human foot and of a face. The full source code of the developed application is also provided as an attachment. (Graphical abstract: the main window of the program during dynamic analysis of a foot thermal image.)
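
    As a minimal sketch of this kind of quantitative measurement (illustrative only; the paper's actual source code is provided as its attachment), one can segment a warm region of a thermogram and report its statistics in a few lines of MATLAB:

        S = load('thermogram.mat');          % hypothetical file holding a temperature matrix T
        T = S.T;                             % temperatures, e.g. in degrees Celsius
        mask = T > mean(T(:)) + std(T(:));   % simple automatic threshold for warm regions
        fprintf('ROI mean %.2f, max %.2f\n', mean(T(mask)), max(T(mask)));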

  3. Master equation analysis of deterministic chemical chaos

    NASA Astrophysics Data System (ADS)

    Wang, Hongli; Li, Qianshu

    1998-05-01

    The underlying microscopic dynamics of deterministic chemical chaos was investigated in this paper. We analyzed the master equation for the Williamowski-Rössler model by direct stochastic simulation as well as in the generating function representation. Simulation within an ensemble revealed that in the chaotic regime the deterministic mass action kinetics is related neither to the ensemble mean nor to the most probable value within the ensemble. Cumulant expansion analysis of the master equation also showed that the molecular fluctuations do not remain bounded but grow linearly in time without limit, indicating that the chaotic trajectories predicted by the phenomenological equations are not physically meaningful. These results suggest that the macroscopic description is no longer useful in the chaotic regime and that a more microscopic description is necessary in this circumstance.

  4. Deterministic nanoassembly: Neutral or plasma route?

    NASA Astrophysics Data System (ADS)

    Levchenko, I.; Ostrikov, K.; Keidar, M.; Xu, S.

    2006-07-01

    It is shown that, owing to the selective delivery of ionic and neutral building blocks directly from the ionized gas phase and via surface migration, plasma environments offer a greater degree of deterministic synthesis of ordered nanoassemblies than thermal chemical vapor deposition. The results of hybrid Monte Carlo (gas phase) and adatom self-organization (surface) simulations suggest that higher aspect ratios and better size and pattern uniformity of carbon nanotip microemitters can be achieved via the plasma route.

  5. Deterministic Mean-Field Ensemble Kalman Filtering

    SciTech Connect

    Law, Kody J. H.; Tembine, Hamidou; Tempone, Raul

    2016-05-03

    The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. In this paper, a density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence κ between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d < 2κ. The fidelity of approximation of the true distribution is also established using an extension of the total variation metric to random measures. Lastly, this is limited by a Gaussian bias term arising from nonlinearity/non-Gaussianity of the model, which arises in both deterministic and standard EnKF. Numerical results support and extend the theory.
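
    For contrast with the density-based approximation studied here, the standard stochastic EnKF analysis step that serves as the baseline can be sketched in a few lines of MATLAB (a linear-observation toy setting; all dimensions and noise levels are assumptions):

        N = 100;  d = 2;                    % ensemble size, state dimension
        X = randn(d, N);                    % forecast ensemble
        H = [1 0];                          % linear observation operator
        R = 0.1;  y = 0.5;                  % obs noise variance and observed value
        A = X - mean(X, 2);                 % ensemble anomalies
        Pf = A * A' / (N - 1);              % sample forecast covariance
        K = Pf * H' / (H * Pf * H' + R);    % Kalman gain
        Yp = y + sqrt(R) * randn(1, N);     % perturbed observations
        X = X + K * (Yp - H * X);           % analysis ensemble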

  6. Deterministic Mean-Field Ensemble Kalman Filtering

    DOE PAGES

    Law, Kody J. H.; Tembine, Hamidou; Tempone, Raul

    2016-05-03

    The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. In this paper, a density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence κ between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d < 2κ. The fidelity of approximation of the true distribution is also established using an extension of the total variation metric to random measures. Lastly, this is limited by a Gaussian bias term arising from nonlinearity/non-Gaussianity of the model, which arises in both deterministic and standard EnKF. Numerical results support and extend the theory.

  7. SAR polar format implementation with MATLAB.

    SciTech Connect

    Martin, Grant D.; Doerry, Armin Walter

    2005-11-01

    Traditional polar format image formation for Synthetic Aperture Radar (SAR) requires a large amount of processing power and memory to accomplish in real time. These requirements can thus rule out interpreted language environments such as MATLAB. However, with trapezoidal aperture phase history collection and changes to the traditional polar format algorithm, certain optimizations make MATLAB a possible tool for image formation. This document's purpose is therefore two-fold. The first part outlines a change to the existing polar format MATLAB implementation utilizing the Chirp Z-Transform that improves performance and memory usage, achieving near real-time results for smaller apertures. The second is the addition of two new image formation options that perform a more traditional interpolation-style image formation. These options allow the continued exploration of possible interpolation methods for image formation, and some preliminary results comparing image quality are given.
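
    The core of the Chirp Z-Transform optimization is MATLAB's czt function, which evaluates the spectrum over a narrow band without a full-size FFT. The following sketch (illustrative parameters, not the report's) zooms into a chosen normalized frequency band:

        n = 256;                            % samples in one phase-history line
        x = exp(1j*2*pi*0.123*(0:n-1)).';   % synthetic complex tone
        m = 512;                            % number of output frequency samples
        f1 = 0.10;  f2 = 0.15;              % zoom band, cycles/sample
        w = exp(-1j*2*pi*(f2 - f1)/m);      % ratio between consecutive spectral points
        a = exp(1j*2*pi*f1);                % starting point on the unit circle
        X = czt(x, m, w, a);                % zoomed DFT over [f1, f2]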

  8. Application of MATLAB in optical alignment

    NASA Astrophysics Data System (ADS)

    Xiao, Shu; Tang, Yong

    2008-03-01

    This article introduces a new MATLAB-aided method used while adjusting the average-windward-area measuring system for cannonball fragments. The method can analyze the amount of deviation not only qualitatively but also quantitatively, whereas the traditional method allows only qualitative analysis. When the measuring system operates, the four optical axes of the CCD cameras must aim precisely at the center point of the universal platform, each with a different object distance and image distance. During assembly and debugging of the system, the acquired images are analyzed with MATLAB to obtain the amount of deviation, which then serves as the basis for adjustment.
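
    A minimal sketch of such a quantitative deviation measurement (assumed for illustration, not the article's code, and relying on the Image Processing Toolbox) locates the bright-spot centroid in a camera frame and compares it with the aim point:

        I = imread('ccd_frame.png');             % hypothetical CCD frame
        if ndims(I) == 3, I = rgb2gray(I); end   % ensure a grayscale image
        bw = I > 200;                            % threshold the bright spot
        s = regionprops(bw, 'Centroid');         % centroid of the spot region
        target = [320, 240];                     % intended aim point in pixels (assumed)
        deviation = s(1).Centroid - target       % [dx, dy] misalignment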

  9. MASCOT - MATLAB Stability and Control Toolbox

    NASA Technical Reports Server (NTRS)

    Kenny, Sean; Crespo, Luis

    2011-01-01

    MASCOT software was created to provide the conceptual aircraft designer accurate predictions of air vehicle stability and control characteristics. The code takes as input mass property data in the form of an inertia tensor, aerodynamic loading data, and propulsion (i.e. thrust) loading data. Using fundamental non-linear equations of motion, MASCOT then calculates vehicle trim and static stability data for any desired flight condition. Common predefined flight conditions are included. The predefined flight conditions include six horizontal and six landing rotation conditions with varying options for engine out, crosswind and sideslip, plus three takeoff rotation conditions. Results are displayed through a unique graphical interface developed to provide stability and control information to the conceptual design engineers using a qualitative scale indicating whether the vehicle has acceptable, marginal, or unacceptable static stability characteristics. This software allows the user to prescribe the vehicle's CG location, mass, and inertia tensor so that any loading configuration between empty weight and maximum take-off weight can be analyzed. The required geometric and aerodynamic data as well as mass and inertia properties may be entered directly, passed through data files, or come from external programs such as Vehicle Sketch Pad (VSP). The current version of MASCOT has been tested with VSP used to compute the required data, which is then passed directly into the program. In VSP, the vehicle geometry is created and manipulated. The aerodynamic coefficients, stability and control derivatives, are calculated using VorLax, which is now available directly within VSP. MASCOT has been written exclusively using the technical computing language MATLAB. This innovation is able to bridge the gap between low-fidelity conceptual design and higher-fidelity stability and control analysis. This new tool enables the conceptual design engineer to include detailed static stability

  10. Parallel calculations on shared memory, NUMA-based computers using MATLAB

    NASA Astrophysics Data System (ADS)

    Krotkiewski, Marcin; Dabrowski, Marcin

    2014-05-01

    Achieving satisfactory computational performance in numerical simulations on modern computer architectures can be a complex task. Multi-core design makes it necessary to parallelize the code. Efficient parallelization on NUMA (Non-Uniform Memory Access) shared memory architectures necessitates explicit placement of the data in the memory close to the CPU that uses it. In addition, using more than 8 CPUs (~100 cores) requires a cluster solution of interconnected nodes, which involves (expensive) communication between the processors. It takes significant effort to overcome these challenges even when programming in low-level languages, which give the programmer full control over data placement and work distribution. Instead, many modelers use high-level tools such as MATLAB, which severely limit the optimization/tuning options available. Nonetheless, the advantage of programming simplicity and a large available code base can tip the scale in favor of MATLAB. We investigate whether MATLAB can be used for efficient, parallel computations on modern shared memory architectures. A common approach to performance optimization of MATLAB programs is to identify a bottleneck and migrate the corresponding code block to a MEX file implemented in, e.g. C. Instead, we aim at achieving a scalable parallel performance of MATLAB's core functionality. Some of MATLAB's internal functions (e.g., bsxfun, sort, BLAS3, operations on vectors) are multi-threaded. Achieving high parallel efficiency of those may potentially improve the performance of a significant portion of MATLAB's code base. Since we do not have MATLAB's source code, our performance tuning relies on the tools provided by the operating system alone. Most importantly, we use custom memory allocation routines, thread to CPU binding, and memory page migration. The performance tests are carried out on multi-socket shared memory systems (2- and 4-way Intel-based computers), as well as a Distributed Shared Memory machine with 96 CPU

  11. YALINA analytical benchmark analyses using the deterministic ERANOS code system.

    SciTech Connect

    Gohar, Y.; Aliberti, G.; Nuclear Engineering Division

    2009-08-31

    The growing stockpile of nuclear waste constitutes a severe challenge for mankind for more than a hundred thousand years. To reduce the radiotoxicity of the nuclear waste, the Accelerator Driven System (ADS) has been proposed. One of the most important issues of ADS technology is the choice of the appropriate neutron spectrum for the transmutation of Minor Actinides (MA) and Long Lived Fission Products (LLFP). This report presents the analytical analyses obtained with the deterministic ERANOS code system for the YALINA facility within: (a) the collaboration between Argonne National Laboratory (ANL) of USA and the Joint Institute for Power and Nuclear Research (JIPNR) Sosny of Belarus; and (b) the IAEA coordinated research projects for accelerator driven systems (ADS). This activity is conducted as a part of the Russian Research Reactor Fuel Return (RRRFR) Program and the Global Threat Reduction Initiative (GTRI) of DOE/NNSA.

  12. Deterministic Folding in Stiff Elastic Membranes

    NASA Astrophysics Data System (ADS)

    Tallinen, T.; Åström, J. A.; Timonen, J.

    2008-09-01

    Crumpled membranes have been found to be characterized by complex patterns of spatially seemingly random facets separated by narrow ridges of high elastic energy. We demonstrate by numerical simulations that compression of stiff elastic membranes with small randomness in their initial configurations leads to either random ridge configurations (high entropy) or nearly deterministic folds (low elastic energy). For folding with symmetric ridge configurations to appear in part of the crumpling processes, the crumpling rate must be slow enough. Folding stops when the thickness of the folded structure becomes important, and crumpling continues thereafter as a random process.

  13. Deterministic quantum computation with one photonic qubit

    NASA Astrophysics Data System (ADS)

    Hor-Meyll, M.; Tasca, D. S.; Walborn, S. P.; Ribeiro, P. H. Souto; Santos, M. M.; Duzzioni, E. I.

    2015-07-01

    We show that deterministic quantum computing with one qubit (DQC1) can be experimentally implemented with a spatial light modulator, using the polarization and the transverse spatial degrees of freedom of light. The scheme allows the computation of the trace of a high-dimension matrix, being limited by the resolution of the modulator panel and the technical imperfections. In order to illustrate the method, we compute the normalized trace of unitary matrices and implement the Deutsch-Jozsa algorithm. The largest matrix that can be manipulated with our setup is 1080 × 1920, which is able to represent a system with approximately 21 qubits.

  14. ACCELERATORS: A GUI tool for beta function measurement using MATLAB

    NASA Astrophysics Data System (ADS)

    Chen, Guang-Ling; Tian, Shun-Qiang; Jiang, Bo-Cheng; Liu, Gui-Min

    2009-04-01

    The beta function measurement is used to detect the shift in the betatron tune as the strength of an individual quadrupole magnet is varied. A GUI (graphical user interface) tool for the beta function measurement has been developed using the MATLAB programming language in the Linux environment, which facilitates the commissioning of the Shanghai Synchrotron Radiation Facility (SSRF) storage ring. In this paper, we describe the design of the application, give some measurement results, and discuss the definition of the measurement. The program has been optimized to work around some restrictions of the AT tracking code. After correction with LOCO (linear optics from closed orbits), the horizontal and vertical root mean square values (rms values) can be reduced to 0.12 and 0.10.
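
    The relation underlying such measurements is the standard thin-lens tune-shift formula, stated here for context (the values below are illustrative assumptions, not from the paper):

        dQ  = 0.002;             % measured betatron tune shift (assumed)
        dKL = 1e-3;              % change in integrated quadrupole strength, 1/m (assumed)
        beta = 4*pi*dQ/dKL;      % average beta function at the quadrupole, m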

  15. Discrete Deterministic and Stochastic Petri Nets

    NASA Technical Reports Server (NTRS)

    Zijal, Robert; Ciardo, Gianfranco

    1996-01-01

    Petri nets augmented with timing specifications have gained wide acceptance in the area of performance and reliability evaluation of complex systems exhibiting concurrency, synchronization, and conflicts. The state space of time-extended Petri nets is mapped onto its basic underlying stochastic process, which can be shown to be Markovian under the assumption of exponentially distributed firing times. The integration of exponentially and non-exponentially distributed timing is still one of the major problems for the analysis and was first attacked for continuous time Petri nets at the cost of structural or analytical restrictions. We propose a discrete deterministic and stochastic Petri net (DDSPN) formalism with no imposed structural or analytical restrictions where transitions can fire either in zero time or according to arbitrary firing times that can be represented as the time to absorption in a finite absorbing discrete time Markov chain (DTMC). Exponentially distributed firing times are then approximated arbitrarily well by geometric distributions. Deterministic firing times are a special case of the geometric distribution. The underlying stochastic process of a DDSPN is then also a DTMC, from which the transient and stationary solution can be obtained by standard techniques. A comprehensive algorithm and some state space reduction techniques for the analysis of DDSPNs are presented comprising the automatic detection of conflicts and confusions, which removes a major obstacle for the analysis of discrete time models.

  16. Deterministic prediction of surface wind speed variations

    NASA Astrophysics Data System (ADS)

    Drisya, G. V.; Kiplangat, D. C.; Asokan, K.; Satheesh Kumar, K.

    2014-11-01

    Accurate prediction of wind speed is an important aspect of various tasks related to wind energy management such as wind turbine predictive control and wind power scheduling. The most typical characteristic of wind speed data is its persistent temporal variation. Most of the techniques reported in the literature for prediction of wind speed and power are based on statistical methods or probabilistic distributions of wind speed data. In this paper we demonstrate that deterministic forecasting methods can make accurate short-term predictions of wind speed using past data, at locations where the wind dynamics exhibit chaotic behaviour. The predictions are remarkably accurate up to 1 h with a normalised RMSE (root mean square error) of less than 0.02 and reasonably accurate up to 3 h with an error of less than 0.06. Repeated application of these methods at 234 different geographical locations for predicting wind speeds at 30-day intervals for 3 years reveals that the accuracy of prediction is more or less the same across all locations and time periods. Comparison of the results with f-ARIMA model predictions shows that the deterministic models with suitable parameters are capable of returning improved prediction accuracy and capturing the dynamical variations of the actual time series more faithfully. These methods are simple and computationally efficient and require only records of past data for making short-term wind speed forecasts within a practically tolerable margin of error.

  17. Deterministic Creation of Macroscopic Cat States

    PubMed Central

    Lombardo, Daniel; Twamley, Jason

    2015-01-01

    Despite current technological advances, observing quantum mechanical effects outside of the nanoscopic realm is extremely challenging. For this reason, the observation of such effects on larger scale systems is currently one of the most attractive goals in quantum science. Many experimental protocols have been proposed for both the creation and observation of quantum states on macroscopic scales, in particular, in the field of optomechanics. The majority of these proposals, however, rely on performing measurements, making them probabilistic. In this work we develop a completely deterministic method of macroscopic quantum state creation. We study the prototypical optomechanical Membrane In The Middle model and show that by controlling the membrane’s opacity, and through careful choice of the optical cavity initial state, we can deterministically create and grow the spatial extent of the membrane’s position into a large cat state. It is found that by using a Bose-Einstein condensate as a membrane high fidelity cat states with spatial separations of up to ∼300 nm can be achieved. PMID:26345157

  18. Deterministic forward scatter from surface gravity waves.

    PubMed

    Deane, Grant B; Preisig, James C; Tindle, Chris T; Lavery, Andone; Stokes, M Dale

    2012-12-01

    Deterministic structures in sound reflected by gravity waves, such as focused arrivals and Doppler shifts, have implications for underwater acoustics and sonar, and the performance of underwater acoustic communications systems. A stationary phase analysis of the Helmholtz-Kirchhoff scattering integral yields the trajectory of focused arrivals and their relationship to the curvature of the surface wave field. Deterministic effects along paths up to 70 water depths long are observed in shallow water measurements of surface-scattered sound at the Martha's Vineyard Coastal Observatory. The arrival time and amplitude of surface-scattered pulses are reconciled with model calculations using measurements of surface waves made with an upward-looking sonar mounted mid-way along the propagation path. The root mean square difference between the modeled and observed pulse arrival amplitude and delay, respectively, normalized by the maximum range of amplitudes and delays, is found to be 0.2 or less for the observation periods analyzed. Cross-correlation coefficients for modeled and observed pulse arrival delays varied from 0.83 to 0.16 depending on surface conditions. Cross-correlation coefficients for normalized pulse energy for the same conditions were small and varied from 0.16 to 0.06. In contrast, the modeled and observed pulse arrival delay and amplitude statistics were in good agreement.

  19. Improving Deterministic Reserve Requirements for Security Constrained Unit Commitment and Scheduling Problems in Power Systems

    NASA Astrophysics Data System (ADS)

    Wang, Fengyu

    Traditional deterministic reserve requirements rely on ad hoc, rule-of-thumb methods to determine adequate reserve in order to ensure a reliable unit commitment. Since congestion and uncertainties exist in the system, both the quantity and the location of reserves are essential to ensure system reliability and market efficiency. Existing deterministic reserve requirements acquire operating reserves on a zonal basis and do not fully capture the impact of congestion. The purpose of a reserve zone is to ensure that operating reserves are spread across the network. Operating reserves are shared inside each reserve zone, but intra-zonal congestion may block the deliverability of operating reserves within a zone. Thus, improving reserve policies such as reserve zones may improve the location and deliverability of reserve. As more non-dispatchable renewable resources are integrated into the grid, it will become increasingly difficult to predict transfer capabilities and network congestion. At the same time, renewable resources require operators to acquire more operating reserves. With existing deterministic reserve requirements unable to ensure optimal reserve locations, the importance of reserve location and reserve deliverability will increase. While stochastic programming can be used to determine reserve by explicitly modelling uncertainties, there are still scalability as well as pricing issues. Therefore, new methods to improve existing deterministic reserve requirements are desired. One key barrier to improving existing deterministic reserve requirements is their potential market impacts. A metric, quality of service, is proposed in this thesis to evaluate the price signal and market impacts of proposed hourly reserve zones. The three main goals of this thesis are: 1) to develop a theoretical and mathematical model to better locate reserve while maintaining the deterministic unit commitment and economic dispatch

  20. Atmospheric Downscaling using Genetic Programming

    NASA Astrophysics Data System (ADS)

    Zerenner, Tanja; Venema, Victor; Simmer, Clemens

    2013-04-01

    Coupling models for the different components of the Soil-Vegetation-Atmosphere system requires up- and downscaling procedures. The subject of our work is the downscaling scheme used to derive high-resolution forcing data for land-surface and subsurface models from coarser atmospheric model output. The current downscaling scheme [Schomburg et al. 2010, 2012] combines a bi-quadratic spline interpolation, deterministic rules, and autoregressive noise. For the development of the scheme, training and validation data sets were created by carrying out high-resolution runs of the atmospheric model. The deterministic rules in this scheme are partly based on known physical relations and partly determined by an automated search for linear relationships between the high-resolution fields of the atmospheric model output and high-resolution data on surface characteristics. So far, deterministic rules are available for downscaling surface pressure and, partially, depending on the prevailing weather conditions, for near-surface temperature and radiation. The aim of our work is to improve those rules and to find deterministic rules for the remaining variables that require downscaling, e.g. precipitation or near-surface specific humidity. To accomplish that, we broaden the search by allowing for interdependencies between different atmospheric parameters, non-linear relations, and non-local and time-lagged relations. To cope with the vast number of possible solutions, we use genetic programming, a method from machine learning based on the principles of natural evolution. We are currently working with GPLAB, a genetic programming toolbox for Matlab. At first we tested the GP system on retrieving the known physical rule for downscaling surface pressure, i.e. the hydrostatic equation, from our training data, and found this to be a simple task for the GP system. Furthermore we have improved the accuracy and efficiency of the GP solution by implementing constant variation and

  1. Systems Biology Toolbox for MATLAB: a computational platform for research in systems biology.

    PubMed

    Schmidt, Henning; Jirstrand, Mats

    2006-02-15

    We present a Systems Biology Toolbox for the widely used general purpose mathematical software MATLAB. The toolbox offers systems biologists an open and extensible environment, in which to explore ideas, prototype and share new algorithms, and build applications for the analysis and simulation of biological and biochemical systems. Additionally it is well suited for educational purposes. The toolbox supports the Systems Biology Markup Language (SBML) by providing an interface for import and export of SBML models. In this way the toolbox connects nicely to other SBML-enabled modelling packages. Models are represented in an internal model format and can be described either by entering ordinary differential equations or, more intuitively, by entering biochemical reaction equations. The toolbox contains a large number of analysis methods, such as deterministic and stochastic simulation, parameter estimation, network identification, parameter sensitivity analysis and bifurcation analysis.
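
    To illustrate the kind of deterministic simulation the toolbox automates (a pure-MATLAB sketch, not the toolbox's own API), a mass-action model of the reversible reaction A + B <-> C can be integrated directly:

        k1 = 1.0;  k2 = 0.5;                        % assumed rate constants
        rhs = @(t, x) [-k1*x(1)*x(2) + k2*x(3);     % d[A]/dt
                       -k1*x(1)*x(2) + k2*x(3);     % d[B]/dt
                        k1*x(1)*x(2) - k2*x(3)];    % d[C]/dt
        [t, x] = ode15s(rhs, [0 10], [1; 1; 0]);    % stiff solver, 10 time units
        plot(t, x);  legend('A', 'B', 'C');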

  2. Modelling of Photovoltaic Module Using Matlab Simulink

    NASA Astrophysics Data System (ADS)

    Afiqah Zainal, Nurul; Ajisman; Razlan Yusoff, Ahmad

    2016-02-01

    A photovoltaic (PV) module consists of a number of photovoltaic cells connected in series and parallel and is used to generate electricity from solar energy. The characteristics of a PV module differ depending on the model and on environmental factors. In this paper, simulation of a photovoltaic module using a Matlab Simulink approach is presented. The method is used to determine the characteristics of the PV module in various conditions, especially at different levels of irradiation and temperature. By varying irradiation and temperature, the output power, voltage, and current of the PV module can be determined. In addition, all results from Matlab Simulink are verified with theoretical calculations. The proposed model helps in better understanding PV module characteristics in various environmental conditions.
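
    The characteristics referred to above are commonly produced by the standard single-diode model; the following sketch uses that textbook equation with assumed, illustrative module parameters (not the paper's values) to trace a power curve in plain MATLAB:

        Iph = 5;  I0 = 1e-9;                 % photocurrent and saturation current (assumed)
        Rs = 0.3;  Rsh = 300;                % series and shunt resistance, ohm (assumed)
        aVt = 1.3*36*0.0257;                 % ideality * cells in series * thermal voltage
        V = linspace(0, 22, 200);  I = zeros(size(V));
        for k = 1:numel(V)
            f = @(i) Iph - I0*(exp((V(k) + i*Rs)/aVt) - 1) - (V(k) + i*Rs)/Rsh - i;
            I(k) = fzero(f, [-Iph, 2*Iph]);  % solve the implicit I-V relation
        end
        plot(V, max(I, 0).*V);  xlabel('Voltage (V)');  ylabel('Power (W)');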

  3. MATLAB tensor classes for fast algorithm prototyping.

    SciTech Connect

    Bader, Brett William; Kolda, Tamara Gibson

    2004-10-01

    Tensors (also known as multidimensional arrays or N-way arrays) are used in a variety of applications ranging from chemometrics to psychometrics. We describe four MATLAB classes for tensor manipulations that can be used for fast algorithm prototyping. The tensor class extends the functionality of MATLAB's multidimensional arrays by supporting additional operations such as tensor multiplication. The tensor as matrix class supports the 'matricization' of a tensor, i.e., the conversion of a tensor to a matrix (and vice versa), a commonly used operation in many algorithms. Two additional classes represent tensors stored in decomposed formats: cp tensor and tucker tensor. We describe all of these classes and then demonstrate their use by showing how to implement several tensor algorithms that have appeared in the literature.
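
    A brief usage sketch follows; the tensor class name is from the abstract, while ttm (tensor-times-matrix) follows current Tensor Toolbox conventions, so the exact names in the 2004 report's classes may differ:

        X = tensor(rand(3, 4, 2));   % wrap a 3-way array in the tensor class
        M = rand(5, 3);
        Y = ttm(X, M, 1);            % tensor-times-matrix along mode 1
        size(Y)                      % result is 5 x 4 x 2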

  4. Matlab as a robust control design tool

    NASA Technical Reports Server (NTRS)

    Gregory, Irene M.

    1994-01-01

    This presentation introduces Matlab as a tool used in flight control research. The example used to illustrate some of the capabilities of this software is a robust controller designed for a single-stage-to-orbit air-breathing vehicle's ascent to orbit. The global requirements of the controller are to stabilize the vehicle and follow a trajectory in the presence of atmospheric disturbances and strong dynamic coupling between the airframe and propulsion.

  5. Automated Microarray Image Analysis Toolbox for MATLAB

    SciTech Connect

    White, Amanda M.; Daly, Don S.; Willse, Alan R.; Protic, Miroslava; Chandler, Darrell P.

    2005-09-01

    The Automated Microarray Image Analysis (AMIA) Toolbox for MATLAB is a flexible, open-source microarray image analysis tool that allows the user to customize the analysis of sets of microarray images. This tool provides several methods of identifying and quantifying spot statistics, as well as extensive diagnostic statistics and images to identify poor data quality or processing. The open nature of this software allows researchers to understand the algorithms used to provide intensity estimates and to modify them easily if desired.

  6. MATLAB/Simulink analytic radar modeling environment

    NASA Astrophysics Data System (ADS)

    Esken, Bruce L.; Clayton, Brian L.

    2001-09-01

    Analytic radar models are simulations based on abstract representations of the radar, the RF environment through which radar signals propagate, and the reflections produced by targets, clutter and multipath. These models have traditionally been developed in FORTRAN and have evolved over the last 20 years into efficient and well-accepted codes. However, current models are limited in two primary areas. First, by the nature of algorithm-based analytical models, they can be difficult to understand by non-programmers and equally difficult to modify or extend. Second, there is strong interest in re-using these models to support higher-level weapon system and mission-level simulations. To address these issues, a model development approach has been demonstrated which utilizes the MATLAB/Simulink graphical development environment. Because the MATLAB/Simulink environment graphically represents model algorithms - thus providing visibility into the model - algorithms can be easily analyzed and modified by engineers and analysts with limited software skills. In addition, software tools have been created that provide for the automatic code generation of C++ objects. These objects are created with well-defined interfaces enabling them to be used by modeling architectures external to the MATLAB/Simulink environment. The approach utilized is generic and can be extended to other engineering fields.

  7. Deterministic polishing from theory to practice

    NASA Astrophysics Data System (ADS)

    Hooper, Abigail R.; Hoffmann, Nathan N.; Sarkas, Harry W.; Escolas, John; Hobbs, Zachary

    2015-10-01

    Improving predictability in optical fabrication can go a long way towards increasing profit margins and maintaining a competitive edge in an economic environment where pressure is mounting for optical manufacturers to cut costs. A major source of hidden cost is rework - the share of production that does not meet specification in the first pass through the polishing equipment. Rework substantially adds to a part's processing and labor costs, creates bottlenecks in production lines, and causes frustration for managers, operators and customers. The polishing process involves several interacting variables, including glass type, polishing pads, machine type, RPM, downforce, slurry type, Baumé level and even the operators themselves. Adjusting the process to get every variable under control while operating in a robust space not only provides a deterministic polishing process, which improves profitability, but also produces a higher quality optic.

  8. Inertia and scaling in deterministic lateral displacement.

    PubMed

    Bowman, Timothy J; Drazer, German; Frechette, Joelle

    2013-01-01

    The ability to separate and analyze chemical species with high resolution, sensitivity, and throughput is central to the development of microfluidic systems. Deterministic lateral displacement (DLD) is a continuous separation method based on the transport of species through an array of obstacles. In the case of force-driven DLD (f-DLD), size-based separation can be modelled effectively using a simple particle-obstacle collision model. We use a macroscopic model to study f-DLD and demonstrate, via a simple scaling, that the method is indeed predominantly a size-based phenomenon at low Reynolds numbers. More importantly, we demonstrate that inertia effects provide the additional capability of separating particles of the same size but different densities, and could enhance separation at high-throughput conditions. We also show that a direct conversion of macroscopic results to microfluidic settings is possible with a simple scaling based on the size of the obstacles, which results in a universal curve.

  9. Deterministic phase slips in mesoscopic superconducting rings

    NASA Astrophysics Data System (ADS)

    Petković, I.; Lollo, A.; Glazman, L. I.; Harris, J. G. E.

    2016-11-01

    The properties of one-dimensional superconductors are strongly influenced by topological fluctuations of the order parameter, known as phase slips, which cause the decay of persistent current in superconducting rings and the appearance of resistance in superconducting wires. Despite extensive work, quantitative studies of phase slips have been limited by uncertainty regarding the order parameter's free-energy landscape. Here we show detailed agreement between measurements of the persistent current in isolated flux-biased rings and Ginzburg-Landau theory over a wide range of temperature, magnetic field and ring size; this agreement provides a quantitative picture of the free-energy landscape. We also demonstrate that phase slips occur deterministically as the barrier separating two competing order parameter configurations vanishes. These results will enable studies of quantum and thermal phase slips in a well-characterized system and will provide access to outstanding questions regarding the nature of one-dimensional superconductivity.

  10. Deterministic multi-zone ice accretion modeling

    NASA Technical Reports Server (NTRS)

    Yamaguchi, K.; Hansman, R. John, Jr.; Kazmierczak, Michael

    1991-01-01

    The focus here is on a deterministic model of the surface roughness transition behavior of glaze ice. The initial smooth/rough transition location, bead formation, and the propagation of the transition location are analyzed. Based on the hypothesis that the smooth/rough transition location coincides with the laminar/turbulent boundary layer transition location, a multizone model is implemented in the LEWICE code. In order to verify the effectiveness of the model, ice accretion predictions for simple cylinders calculated by the multizone LEWICE are compared to experimental ice shapes. The glaze ice shapes are found to be sensitive to the laminar surface roughness and bead thickness parameters controlling the transition location, while the ice shapes are found to be insensitive to the turbulent surface roughness.

  11. Deterministic remote preparation via the Brown state

    NASA Astrophysics Data System (ADS)

    Ma, Song-Ya; Gao, Cong; Zhang, Pei; Qu, Zhi-Guo

    2017-04-01

    We propose two deterministic remote state preparation (DRSP) schemes that use the Brown state as the entangled channel. First, the remote preparation of an arbitrary two-qubit state is considered; it is worth mentioning that the construction of the measurement bases plays a key role in our scheme. Then, the remote preparation of an arbitrary three-qubit state is investigated. The proposed schemes can be extended to controlled remote state preparation (CRSP) with unit success probabilities. Unlike the existing CRSP schemes via the Brown state, the derived schemes place no restriction on the coefficients while the success probabilities reach 100%, a substantial improvement in success probability. Moreover, we consider DRSP in noisy environments under two important decoherence models, amplitude-damping noise and phase-damping noise.

  12. Block variables for deterministic aperiodic sequences

    NASA Astrophysics Data System (ADS)

    Hörnquist, Michael

    1997-10-01

    We use the concept of block variables to obtain a measure of order/disorder for some one-dimensional deterministic aperiodic sequences. For the Thue-Morse sequence, the Rudin-Shapiro sequence and the period-doubling sequence it is possible to obtain analytical expressions in the limit of infinite sequences. For the Fibonacci sequence, we present some analytical results which can be supported by numerical arguments. It turns out that the block variables show a wide range of different behaviour, some of them indicating that some of the considered sequences are more `random' than others. However, the method does not give any definite answer to the question of which sequence is more disordered than another, and in this sense the results obtained are negative. We compare this with some other ways of measuring the amount of order/disorder in such systems, and there seems to be no direct correspondence between the measures.

  13. Deterministic approaches to coherent diffractive imaging

    NASA Astrophysics Data System (ADS)

    Allen, L. J.; D'Alfonso, A. J.; Martin, A. V.; Morgan, A. J.; Quiney, H. M.

    2016-01-01

    In this review we will consider the retrieval of the wave at the exit surface of an object illuminated by a coherent probe from one or more measured diffraction patterns. These patterns may be taken in the near-field (often referred to as images) or in the far field (the Fraunhofer diffraction pattern, where the wave is the Fourier transform of that at the exit surface). The retrieval of the exit surface wave from such data is an inverse scattering problem. This inverse problem has historically been solved using nonlinear iterative methods, which suffer from convergence and uniqueness issues. Here we review deterministic approaches to obtaining the exit surface wave which ameliorate those problems.

  14. Deterministic phase slips in mesoscopic superconducting rings

    PubMed Central

    Petković, I.; Lollo, A.; Glazman, L. I.; Harris, J. G. E.

    2016-01-01

    The properties of one-dimensional superconductors are strongly influenced by topological fluctuations of the order parameter, known as phase slips, which cause the decay of persistent current in superconducting rings and the appearance of resistance in superconducting wires. Despite extensive work, quantitative studies of phase slips have been limited by uncertainty regarding the order parameter's free-energy landscape. Here we show detailed agreement between measurements of the persistent current in isolated flux-biased rings and Ginzburg–Landau theory over a wide range of temperature, magnetic field and ring size; this agreement provides a quantitative picture of the free-energy landscape. We also demonstrate that phase slips occur deterministically as the barrier separating two competing order parameter configurations vanishes. These results will enable studies of quantum and thermal phase slips in a well-characterized system and will provide access to outstanding questions regarding the nature of one-dimensional superconductivity. PMID:27882924

  15. Deterministic-random separation in nonstationary regime

    NASA Astrophysics Data System (ADS)

    Abboud, D.; Antoni, J.; Sieg-Zieba, S.; Eltabach, M.

    2016-02-01

    In rotating machinery vibration analysis, the synchronous average is perhaps the most widely used technique for extracting periodic components. Periodic components are typically related to gear vibrations, misalignments, unbalances, blade rotations, reciprocating forces, etc. Their separation from other random components is essential in vibration-based diagnosis in order to discriminate useful information from masking noise. However, synchronous averaging theoretically requires the machine to operate under a stationary regime (i.e. the related vibration signals are cyclostationary) and is otherwise jeopardized by the presence of amplitude and phase modulations. A first objective of this paper is to investigate the nature of the nonstationarity induced by the response of a linear time-invariant system subjected to a speed-varying excitation. For this purpose, the concept of a cyclo-non-stationary signal is introduced, which extends the class of cyclostationary signals to speed-varying regimes. Next, a "generalized synchronous average" (GSA) is designed to extract the deterministic part of a cyclo-non-stationary vibration signal, i.e. the analog of the periodic part of a cyclostationary signal. Two estimators of the GSA are proposed. The first returns the synchronous average of the signal at predefined discrete operating speeds. A brief statistical study of it is performed, aiming to provide the user with confidence intervals that reflect the "quality" of the estimator according to the SNR and the estimated speed. The second estimator returns a smoothed version of the former by enforcing continuity over the speed axis. It helps to reconstruct the deterministic component by tracking a specific trajectory dictated by the speed profile (assumed to be known a priori). The proposed method is validated first on synthetic signals and then on actual industrial signals. The usefulness of the approach is demonstrated on envelope-based diagnosis of bearings in variable
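
    For reference, the classical stationary-regime baseline that the GSA generalizes is the plain synchronous average, which in MATLAB amounts to reshaping the signal into known periods and averaging (a sketch with assumed numbers, not the paper's implementation):

        fs = 1e4;  T = 0.02;                 % sample rate (Hz) and cycle period (s), assumed
        nP = round(T * fs);                  % samples per period
        x = randn(1, nP * 50);               % vibration record spanning 50 periods
        xa = mean(reshape(x, nP, []), 2);    % synchronous average over one period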

  16. Non-Deterministic Context and Aspect Choice in Russian.

    ERIC Educational Resources Information Center

    Koubourlis, Demetrius J.

    In any given context, a Russian verb form may be either perfective or imperfective. Perfective aspect signals the completion or result of an action, whereas imperfective does not. Aspect choice is a function of context, and two types of context are distinguished: deterministic and non-deterministic. This paper is part of a larger study whose aim…

  17. Use of deterministic models in sports and exercise biomechanics research.

    PubMed

    Chow, John W; Knudson, Duane V

    2011-09-01

    A deterministic model is a modeling paradigm that determines the relationships between a movement outcome measure and the biomechanical factors that produce such a measure. This review provides an overview of the use of deterministic models in biomechanics research, a historical summary of this research, and an analysis of the advantages and disadvantages of using deterministic models. The deterministic model approach has been utilized in technique analysis over the last three decades, especially in swimming, athletics field events, and gymnastics. In addition to their applications in sports and exercise biomechanics, deterministic models have been applied successfully in research on selected motor skills. The advantage of the deterministic model approach is that it helps to avoid selecting performance or injury variables arbitrarily and to provide the necessary theoretical basis for examining the relative importance of various factors that influence the outcome of a movement task. Several disadvantages of deterministic models, such as the use of subjective measures for the performance outcome, were discussed. It is recommended that exercise and sports biomechanics scholars should consider using deterministic models to help identify meaningful dependent variables in their studies.

  18. PSYCHOACOUSTICS: a comprehensive MATLAB toolbox for auditory testing.

    PubMed

    Soranzo, Alessandro; Grassi, Massimo

    2014-01-01

    PSYCHOACOUSTICS is a new MATLAB toolbox which implements three classic adaptive procedures for auditory threshold estimation. The first includes those of the Staircase family (method of limits, simple up-down and transformed up-down); the second is the Parameter Estimation by Sequential Testing (PEST); and the third is the Maximum Likelihood Procedure (MLP). The toolbox comes with more than twenty built-in experiments each provided with the recommended (default) parameters. However, if desired, these parameters can be modified through an intuitive and user friendly graphical interface and stored for future use (no programming skills are required). Finally, PSYCHOACOUSTICS is very flexible as it comes with several signal generators and can be easily extended for any experiment.

  19. Application in DSP/FPGA design of Matlab/Simulink

    NASA Astrophysics Data System (ADS)

    Liu, Yong-mei; Guan, Yong; Zhang, Jie; Wu, Min-hua; Wu, Lin-wei

    2012-12-01

    As an off-line simulation tool, the modular modelling method of Matlab/Simulink has the features of high efficiency and visualization. In order to realize the fast design and simulation of prototype systems, a new method of SignalWAVe/Simulink mixed modelling is presented, and a Reed-Solomon codec encoder-decoder model is built. The Reed-Solomon codec encoder-decoder model is simulated by Simulink. Further, the C language program and the model's .out executable file are created by the SignalWAVe RTW Options module, which completes the hardware co-simulation. The simulation result conforms to the theoretical analysis, proving the validity and feasibility of this method.

  20. PSYCHOACOUSTICS: a comprehensive MATLAB toolbox for auditory testing

    PubMed Central

    Soranzo, Alessandro; Grassi, Massimo

    2014-01-01

    PSYCHOACOUSTICS is a new MATLAB toolbox which implements three classic adaptive procedures for auditory threshold estimation. The first includes those of the Staircase family (method of limits, simple up-down and transformed up-down); the second is the Parameter Estimation by Sequential Testing (PEST); and the third is the Maximum Likelihood Procedure (MLP). The toolbox comes with more than twenty built-in experiments each provided with the recommended (default) parameters. However, if desired, these parameters can be modified through an intuitive and user friendly graphical interface and stored for future use (no programming skills are required). Finally, PSYCHOACOUSTICS is very flexible as it comes with several signal generators and can be easily extended for any experiment. PMID:25101013

  1. Documentation generator for VHDL and MatLab source codes for photonic and electronic systems

    NASA Astrophysics Data System (ADS)

    Niton, B.; Pozniak, K. T.; Romaniuk, R. S.

    2011-06-01

    The UML, which is a complex system modeling and description technology, has recently been expanding its uses in the field of formalization and algorithmic approaches to systems such as multiprocessor photonic, optoelectronic and advanced electronics carriers; distributed, multichannel measurement systems; optical networks; industrial electronics; and novel R&D solutions. The paper describes a new concept of software dedicated to documenting source codes written in VHDL and MatLab. The work starts with an analysis of available documentation generators for both programming languages, with an emphasis on open-source solutions. We present our own solutions, which are based on the Doxygen program, available under a free license with its source code. Supporting tools for parser building, such as Bison and Flex, were used. The documentation generator application is used for the design of large optoelectronic and electronic measurement and control systems. The paper consists of three parts, which describe the following components of the documentation generator for photonic and electronic systems: concept, MatLab application and VHDL application. This is part one, which describes the system concept. Part two describes the MatLab application; MatLab is used for description of the measured phenomena. Part three describes the VHDL application; VHDL is used for behavioral description of the optoelectronic system. The proposed approach and application document large, complex software configurations for large systems.

  2. Development and testing of a user-friendly Matlab interface for the JHU turbulence database system

    NASA Astrophysics Data System (ADS)

    Graham, Jason; Frederix, Edo; Meneveau, Charles

    2011-11-01

    One of the challenges facing researchers today is the ability to store large-scale data sets in a way that promotes easy access to the data and sharing among the research community. A public turbulence database cluster has been constructed in which 27 terabytes of a direct numerical simulation of isotropic turbulence are stored (Li et al., 2008, JoT). The public database provides researchers the ability to retrieve subsets of the spatiotemporal data remotely from a client machine anywhere over the internet. In addition to C and Fortran client interfaces, we now present a new Matlab interface based on Matlab's intrinsic SOAP functions. The Matlab interface provides the benefit of a high-level programming language with a plethora of intrinsic functions and toolboxes. In this talk, we will discuss several aspects of the Matlab interface, including its development, optimization, usage, and application to the isotropic turbulence data. We will demonstrate several examples (visualizations, statistical analysis, etc.) which illustrate the tool. Supported by NSF (CDI-II, CMMI-0941530) and Eindhoven University of Technology's Masters internship program.

  3. Optimal Deterministic Ring Exploration with Oblivious Asynchronous Robots

    NASA Astrophysics Data System (ADS)

    Lamani, Anissa; Potop-Butucaru, Maria Gradinariu; Tixeuil, Sébastien

    We consider the problem of exploring an anonymous unoriented ring of size n by k identical, oblivious, asynchronous mobile robots that are unable to communicate, yet have the ability to sense their environment and take decisions based on their local view. Previous work in this weak scenario proves that k must not divide n for a deterministic solution to exist. Also, it is known that the minimum number of robots (either deterministic or probabilistic) to explore a ring of size n is 4. An upper bound of 17 robots holds in the deterministic case, while 4 probabilistic robots are sufficient. In this paper, we close the complexity gap in the deterministic setting by proving that no deterministic exploration is feasible with fewer than five robots, and that five robots are sufficient for any n that is coprime with five. Our protocol completes exploration in O(n) robot moves, which is also optimal.

  4. Ground motion following selection of SRS design basis earthquake and associated deterministic approach

    SciTech Connect

    Not Available

    1991-03-01

    This report summarizes the results of a deterministic assessment of earthquake ground motions at the Savannah River Site (SRS). The purpose of this study is to assist the Environmental Sciences Section of the Savannah River Laboratory in reevaluating the design basis earthquake (DBE) ground motion at SRS, using approaches defined in Appendix A to 10 CFR Part 100. This work is in support of the Seismic Engineering Section's Seismic Qualification Program for reactor restart.

  5. Nonstationary discrete-time deterministic and stochastic control systems with infinite horizon

    NASA Astrophysics Data System (ADS)

    Guo, Xianping; Hernández-del-Valle, Adrián; Hernández-Lerma, Onésimo

    2010-09-01

    This article is about nonstationary nonlinear discrete-time deterministic and stochastic control systems with Borel state and control spaces, possibly noncompact control constraint sets, and unbounded costs. The control problem is to minimise an infinite-horizon total cost performance index. Using dynamic programming arguments we show that, under suitable assumptions, the optimal cost functions satisfy optimality equations, which in turn give a procedure to find optimal control policies.
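
    For a stationary finite-state analogue, the dynamic programming argument reduces to successive approximation of the optimality equation V(x) = min over a of [c(x,a) + beta*V(f(x,a))]; the sketch below illustrates this with an assumed cost and deterministic transition map, not the article's general Borel-space setting.

      % Value-iteration sketch on an assumed finite deterministic system;
      % the cost c and dynamics f are illustrative, not from the article.
      nX = 50; nA = 5; beta = 0.95;           % states, actions, discount
      [X, A] = ndgrid(1:nX, 1:nA);
      c = (X - 25).^2/100 + A;                % assumed stage cost c(x,a)
      f = min(max(X - A + 2, 1), nX);         % assumed next-state map f(x,a)
      V = zeros(nX, 1);
      for k = 1:1000                          % successive approximations
          Vnew = min(c + beta*V(f), [], 2);   % Bellman operator
          if max(abs(Vnew - V)) < 1e-9, break, end
          V = Vnew;
      end
      [~, policy] = min(c + beta*V(f), [], 2);  % optimal stationary policy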

  6. Inductive voltage divider modeling in Matlab

    NASA Astrophysics Data System (ADS)

    Andreev, S. A.; Kim, V. L.

    2017-01-01

    Inductive voltage dividers have the most appropriate metrological characteristics for alternating current and are widely used for converting physical signals. A model of a double-decade inductive voltage divider was designed with the help of Matlab/Simulink. The first decade is an inductive voltage divider with balanced winding; the second decade is a single-stage inductive voltage divider. In the paper, a new transfer function algorithm is given. The study examines the errors and differences that appear between a third-order reduced model and a twentieth-order unreduced model. The amplitude errors of the reduced and unreduced models differ by no more than 7 %.
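
    The reduced-versus-unreduced comparison reported above can be reproduced in spirit with the Control System Toolbox; the 20th-order model below is a random stand-in for the divider, so only the workflow, not the 7 % figure, carries over.

      % Sketch: compare a balanced 3rd-order reduction against a full
      % 20th-order model; rss() generates a random stable stand-in system.
      rng(1);
      G  = rss(20);                          % placeholder full-order model
      Gr = balred(G, 3);                     % balanced reduction to 3rd order
      w  = logspace(1, 6, 200);              % rad/s
      mag  = squeeze(bode(G,  w));
      magr = squeeze(bode(Gr, w));
      err = abs(mag - magr)./abs(mag);       % relative amplitude deviation
      fprintf('max relative amplitude deviation: %.2g\n', max(err));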

  7. Analysis of pinching in deterministic particle separation

    NASA Astrophysics Data System (ADS)

    Risbud, Sumedh; Luo, Mingxiang; Frechette, Joelle; Drazer, German

    2011-11-01

    We investigate the problem of spherical particles settling under gravity, parallel to the Y-axis, through a pinching gap created by an obstacle (spherical or cylindrical, centered at the origin) and a wall (normal to the X-axis), to uncover the physics governing microfluidic separation techniques such as deterministic lateral displacement and pinched flow fractionation: (1) theoretically, by linearly superimposing the resistances offered by the wall and the obstacle separately, (2) computationally, using the lattice Boltzmann method for particulate systems, and (3) experimentally, by conducting macroscopic experiments. Both theory and simulations show that, for a given initial separation between the particle centre and the Y-axis, the presence of a wall pushes the particles closer to the obstacle than its absence. Experimentally, this is expected to result in an early onset of the short-range repulsive forces caused by solid-solid contact. We indeed observe such an early onset, which we quantify by measuring the asymmetry in the trajectories of the spherical particles around the obstacle. This work is partially supported by the National Science Foundation Grant Nos. CBET-0731032, CMMI-0748094, and CBET-0954840.

  8. 3D deterministic lateral displacement separation systems

    NASA Astrophysics Data System (ADS)

    Du, Siqi; Drazer, German

    2016-11-01

    We present a simple modification to enhance the separation ability of deterministic lateral displacement (DLD) systems by expanding the two-dimensional nature of these devices and driving the particles into size-dependent, fully three-dimensional trajectories. Specifically, we drive the particles through an array of long cylindrical posts, such that they not only move parallel to the basal plane of the posts as in traditional two-dimensional DLD systems (in-plane motion), but also along the axial direction of the solid posts (out-of-plane motion). We show that the (projected) in-plane motion of the particles is completely analogous to that observed in 2D-DLD systems and that the observed trajectories can be predicted based on a model developed in the 2D case. More importantly, we analyze the particles' out-of-plane motion and observe significant differences in the net displacement depending on particle size. Therefore, taking advantage of both the in-plane and out-of-plane motion of the particles, it is possible to achieve the simultaneous fractionation of a polydisperse suspension into multiple streams. We also discuss other modifications to the obstacle array and driving forces that could enhance separation in microfluidic devices.

  9. Deterministically Driven Avalanche Models of Solar Flares

    NASA Astrophysics Data System (ADS)

    Strugarek, Antoine; Charbonneau, Paul; Joseph, Richard; Pirot, Dorian

    2014-08-01

    We develop and discuss the properties of a new class of lattice-based avalanche models of solar flares. These models are readily amenable to a relatively unambiguous physical interpretation in terms of slow twisting of a coronal loop. They share similarities with other avalanche models, such as the classical stick-slip self-organized critical model of earthquakes, in that they are driven globally by a fully deterministic energy-loading process. The model design leads to a systematic deficit of small-scale avalanches. In some portions of model space, mid-size and large avalanching behavior is scale-free, being characterized by event size distributions that have the form of power laws with index values that, in some parameter regimes, compare favorably to those inferred from solar EUV and X-ray flare data. For models using conservative or near-conservative redistribution rules, a population of large, quasiperiodic avalanches can also appear. Although without direct counterparts in the observational global statistics of flare energy release, this latter behavior may be relevant to recurrent flaring in individual coronal loops. This class of models could provide a basis for the prediction of large solar flares.
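
    The sketch below is a generic, deterministically loaded sandpile on a small lattice, included only to make the ingredients of this model class concrete (slow driving, threshold instability, local redistribution); the loading and redistribution rules are simplified placeholders, not the authors' coronal-loop prescription.

      % Generic deterministically driven lattice avalanche sketch.
      N = 32; Zc = 4;                          % lattice size, stability threshold
      Z = zeros(N); sizes = [];
      for t = 1:5000
          Z(N/2, N/2) = Z(N/2, N/2) + 1;       % slow deterministic loading
          s = 0;
          while any(Z(:) >= Zc)                % avalanche: relax unstable sites
              [i, j] = find(Z >= Zc, 1);
              Z(i, j) = Z(i, j) - 4; s = s + 1;
              if i > 1, Z(i-1, j) = Z(i-1, j) + 1; end   % open boundaries:
              if i < N, Z(i+1, j) = Z(i+1, j) + 1; end   % grains toppled off
              if j > 1, Z(i, j-1) = Z(i, j-1) + 1; end   % the edge are lost
              if j < N, Z(i, j+1) = Z(i, j+1) + 1; end
          end
          if s > 0, sizes(end+1) = s; end      % record avalanche sizes
      end
      histogram(sizes)                         % inspect the size distribution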

  10. Stochastic and Deterministic Assembly Processes in Subsurface Microbial Communities

    SciTech Connect

    Stegen, James C.; Lin, Xueju; Konopka, Allan; Fredrickson, Jim K.

    2012-03-29

    A major goal of microbial community ecology is to understand the forces that structure community composition. Deterministic selection by specific environmental factors is sometimes important, but in other cases stochastic or ecologically neutral processes dominate. What is lacking is a unified conceptual framework for understanding why deterministic processes dominate in some contexts but not others. Here we work towards such a framework. By testing predictions derived from general ecological theory, we aim to uncover factors that govern the relative influences of deterministic and stochastic processes. We couple spatiotemporal data on subsurface microbial communities and environmental parameters with metrics and null models of within- and between-community phylogenetic composition. Testing for phylogenetic signal in organismal niches showed that more closely related taxa have more similar habitat associations. Community phylogenetic analyses further showed that ecologically similar taxa coexist to a greater degree than expected by chance. Environmental filtering thus deterministically governs subsurface microbial community composition. More importantly, the influence of deterministic environmental filtering relative to stochastic factors was maximized at both ends of an environmental variation gradient. A stronger role of stochastic factors was, however, supported through analyses of phylogenetic temporal turnover. While phylogenetic turnover was on average faster than expected, most pairwise comparisons were not themselves significantly non-random. The relative influence of deterministic environmental filtering over community dynamics was elevated, however, in the most temporally and spatially variable environments. Our results point to general rules governing the relative influences of stochastic and deterministic processes across micro- and macro-organisms.

  11. Traffic chaotic dynamics modeling and analysis of deterministic network

    NASA Astrophysics Data System (ADS)

    Wu, Weiqiang; Huang, Ning; Wu, Zhitao

    2016-07-01

    Network traffic is an important and directly acting factor in network reliability and performance. To understand the behaviors of network traffic, chaotic dynamics models were proposed and have helped greatly in analyzing nondeterministic networks. Previous research held that chaotic dynamics behavior was caused by random factors, and that deterministic networks would not exhibit chaotic dynamics behavior because they lack random factors. In this paper, we first adopt chaos theory to analyze traffic data collected from a typical deterministic network testbed, avionics full-duplex switched Ethernet (AFDX), and find that chaotic dynamics behavior also exists in deterministic networks. Then, in order to explore the chaos-generating mechanism, we apply mean field theory to construct a traffic dynamics equation (TDE) for deterministic network traffic modeling, without any network random factors. By studying the derived TDE, we propose that chaotic dynamics is one of the natural properties of network traffic, and that it can also be viewed as the effect of the TDE control parameters. A network simulation was performed, and the results verified that network congestion produces the chaotic dynamics in a deterministic network, in agreement with the expectation from the TDE. Our research will be helpful for analyzing the complicated dynamic behavior of traffic in deterministic networks and will contribute to network reliability design and analysis.

  12. Nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates

    DOEpatents

    Melechko, Anatoli V.; McKnight, Timothy E.; Guillorn, Michael A.; Ilic, Bojan; Merkulov, Vladimir I.; Doktycz, Mitchel J.; Lowndes, Douglas H.; Simpson, Michael L.

    2011-05-17

    Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. A method includes depositing a catalyst particle on a surface of a substrate to define a deterministically located position; growing an aligned elongated nanostructure on the substrate, an end of the aligned elongated nanostructure coupled to the substrate at the deterministically located position; coating the aligned elongated nanostructure with a conduit material; removing a portion of the conduit material to expose the catalyst particle; removing the catalyst particle; and removing the elongated nanostructure to define a nanoconduit.

  13. Surface plasmon field enhancements in deterministic aperiodic structures.

    PubMed

    Shugayev, Roman

    2010-11-22

    In this paper we analyze optical properties and plasmonic field enhancements in large aperiodic nanostructures. We introduce an extension of the Generalized Ohm's Law approach to estimate the electromagnetic properties of Fibonacci, Rudin-Shapiro, cluster-cluster aggregate and random deterministic clusters. Our results suggest that deterministic aperiodic structures produce field enhancements comparable to random morphologies while offering a better understanding of field localizations and improved substrate design controllability. Generalized Ohm's Law results for deterministic aperiodic structures are in good agreement with simulations obtained using the discrete dipole method.
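
    For readers unfamiliar with these geometries, a Fibonacci arrangement is generated by the standard substitution rule A -> AB, B -> A; the sketch below builds such a deterministic aperiodic sequence of positions (the two spacings are arbitrary illustrative choices).

      % Build a Fibonacci word and map it to aperiodic particle positions.
      s = 'A';
      for k = 1:10                          % ten substitution generations
          t = '';
          for ch = s
              if ch == 'A', t = [t 'AB']; else, t = [t 'A']; end
          end
          s = t;
      end
      dA = 1.0; dB = 1.618;                 % assumed inter-particle spacings
      steps = (s == 'A')*dA + (s == 'B')*dB;
      x = [0 cumsum(steps(1:end-1))];       % deterministic aperiodic coordinates
      plot(x, zeros(size(x)), '.')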

  14. Deterministic versus stochastic trends: Detection and challenges

    NASA Astrophysics Data System (ADS)

    Fatichi, S.; Barbosa, S. M.; Caporali, E.; Silva, M. E.

    2009-09-01

    The detection of a trend in a time series and the evaluation of its magnitude and statistical significance is an important task in geophysical research. This importance is amplified in climate change contexts, since trends are often used to characterize long-term climate variability and to quantify the magnitude and the statistical significance of changes in climate time series, both at global and local scales. Recent studies have demonstrated that the stochastic behavior of a time series can change the statistical significance of a trend, especially if the time series exhibits long-range dependence. The present study examines the trends in time series of daily average temperature recorded at 26 stations in the Tuscany region (Italy). A new framework for trend detection is proposed. First, two parametric statistical tests, the Phillips-Perron test and the Kwiatkowski-Phillips-Schmidt-Shin test, are applied in order to test for trend-stationary and difference-stationary behavior in the temperature time series. Then long-range dependence is assessed using different approaches, including wavelet analysis, heuristic methods and the fitting of fractionally integrated autoregressive moving average models. The trend detection results are further compared with the results obtained using nonparametric trend detection methods: the Mann-Kendall, Cox-Stuart and Spearman's ρ tests. This study confirms an increase in uncertainty when pronounced stochastic behaviors are present in the data. Nevertheless, for approximately one third of the analyzed records, the stochastic behavior itself cannot explain the long-term features of the time series, and a deterministic positive trend is the most likely explanation.
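
    The parametric screening step described above maps directly onto Econometrics Toolbox calls, and the Mann-Kendall statistic is essentially Kendall's tau between time and the series; the sketch below applies them to a synthetic trend-plus-AR(1) record rather than the Tuscan temperature data.

      % Stationarity tests plus a Mann-Kendall-style check on synthetic data.
      T = 2000; t = (1:T)';
      y = 0.002*t + filter(1, [1 -0.7], randn(T, 1));  % trend + AR(1) noise
      hPP   = pptest(y, 'model', 'TS');   % H0: unit root (trend-stationary alt.)
      hKPSS = kpsstest(y, 'trend', true); % H0: trend stationarity
      tau   = corr(t, y, 'type', 'Kendall');           % Mann-Kendall core
      fprintf('PP: %d  KPSS: %d  Kendall tau: %.2f\n', hPP, hKPSS, tau);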

  15. Deterministic phase retrieval employing spherical illumination

    NASA Astrophysics Data System (ADS)

    Martínez-Carranza, J.; Falaggis, K.; Kozacki, T.

    2015-05-01

    Deterministic Phase Retrieval techniques (DPRTs) employ a series of paraxial beam intensities in order to recover the phase of a complex field. These paraxial intensities are usually generated in systems that employ plane-wave illumination. This type of illumination allows direct processing of the captured intensities with DPRTs for recovering the phase. Furthermore, it has been shown that intensities for DPRTs can be acquired from systems that use spherical illumination as well. However, this type of illumination presents a major setback for DPRTs: the captured intensities change their size for each position of the detector on the propagation axis. In order to apply the DPRTs, rescaling of the captured intensities has to be applied, a step that can increase the error sensitivity of the final phase result if it is not carried out properly. In this work, we introduce a novel system based on a Phase Light Modulator (PLM) for capturing the intensities when employing spherical illumination. The proposed optical system enables us to capture the diffraction pattern of under-, in-, and over-focus intensities. The employment of the PLM allows capturing the corresponding intensities without displacing the detector. Moreover, with the proposed optical system we can accurately control the magnification of the captured intensities. Thus, the stack of captured intensities can be used in DPRTs, overcoming the problems related to the resizing of the images. In order to prove our claims, the corresponding numerical experiments are carried out. These simulations show that the phases retrieved with spherical illumination are accurate and comparable with those that employ plane-wave illumination. We demonstrate that, with the employment of the PLM, the proposed optical system has several advantages: the optical system is compact, the beam size on the detector plane is controlled accurately, and the errors coming from mechanical motion can easily be suppressed.
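
    A classic DPRT of the kind referred to here is the FFT-solved transport-of-intensity equation (TIE); the self-contained sketch below assumes uniform in-focus intensity and symmetric defocus, a simplification of the measurement scheme described in the paper.

      % Minimal transport-of-intensity (TIE) phase retrieval sketch.
      n = 256; I0 = 1; k = 2*pi/633e-9; dz = 1e-3;   % assumed constants
      phiTrue = peaks(n)/10;                    % synthetic phase to recover
      lapPhi  = 4*del2(phiTrue);                % discrete Laplacian (unit pixel)
      Iover  = I0 - (I0*dz/k)*lapPhi;           % simulated defocused frames
      Iunder = I0 + (I0*dz/k)*lapPhi;
      dIdz = (Iover - Iunder)/(2*dz);           % axial intensity derivative
      fr = (-n/2:n/2-1)/n; [u, v] = meshgrid(fr, fr);  % cycles per pixel
      denom = -4*pi^2*ifftshift(u.^2 + v.^2); denom(1,1) = 1;
      F = fft2(-(k/I0)*dIdz); F(1,1) = 0;       % TIE: lap(phi) = -(k/I0) dI/dz
      phi = real(ifft2(F./denom));              % inverse Laplacian via FFT
      imagesc(phi); axis image; colorbar        % compare with phiTrue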

  16. Understanding Vertical Jump Potentiation: A Deterministic Model.

    PubMed

    Suchomel, Timothy J; Lamont, Hugh S; Moir, Gavin L

    2016-06-01

    This review article discusses previous postactivation potentiation (PAP) literature and provides a deterministic model for vertical jump (i.e., squat jump, countermovement jump, and drop/depth jump) potentiation. There are a number of factors that must be considered when designing an effective strength-power potentiation complex (SPPC) focused on vertical jump potentiation. Sport scientists and practitioners must consider the characteristics of the subject being tested and the design of the SPPC itself. Subject characteristics that must be considered when designing an SPPC focused on vertical jump potentiation include the individual's relative strength, sex, muscle characteristics, neuromuscular characteristics, current fatigue state, and training background. Aspects of the SPPC that must be considered for vertical jump potentiation include the potentiating exercise, level and rate of muscle activation, volume load completed, the ballistic or non-ballistic nature of the potentiating exercise, and the rest interval(s) used following the potentiating exercise. Sport scientists and practitioners should design and seek SPPCs that are practical regarding the equipment needed and the rest interval required for a potentiated performance. If practitioners would like to incorporate PAP as a training tool, they must take the athlete's training time restrictions into account, as a number of previously studied SPPCs have been shown to require long rest periods before potentiation can be realized. Thus, practitioners should seek SPPCs that may be effectively implemented in training and that do not require excessive rest intervals that take away from valuable training time. Practitioners may decrease the time needed to realize potentiation by improving their subjects' relative strength.

  17. ZERODUR: deterministic approach for strength design

    NASA Astrophysics Data System (ADS)

    Hartmann, Peter

    2012-12-01

    There is an increasing demand for zero-expansion glass ceramic ZERODUR substrates capable of enduring higher operational static loads or accelerations. The integrity of structures such as optical or mechanical elements for satellites surviving rocket launches, filigree lightweight mirrors, wobbling mirrors, and reticle and wafer stages in microlithography must be guaranteed with low failure probability. Their design requires statistically relevant strength data. The traditional approach using the statistical two-parameter Weibull distribution suffered from two problems: the data sets were too small to obtain distribution parameters with sufficient accuracy, and also too small to decide on the validity of the model. This holds especially for the low failure probability levels that are required for reliable applications. Extrapolation to 0.1% failure probability and below led to design strengths so low that higher-load applications seemed infeasible. New data have been collected with numbers per set large enough to enable tests of the applicability of the three-parameter Weibull distribution. This distribution proved to provide a much better fit to the data. Moreover, it delivers a lower threshold value, i.e. a minimum value for breakage stress, allowing statistical uncertainty to be removed by introducing a deterministic method to calculate design strength. Considerations from the theory of fracture mechanics, proven reliable in proof-test qualifications of delicate structures made from brittle materials, enable fatigue due to stress corrosion to be included in a straightforward way. With the formulae derived, either lifetime can be calculated from a given stress or allowable stress from a minimum required lifetime. The data, distributions, and design strength calculations for several practically relevant surface conditions of ZERODUR are given. The values obtained are significantly higher than those resulting from the two-parameter Weibull approach.

  18. Agent-Based Deterministic Modeling of the Bone Marrow Homeostasis.

    PubMed

    Kurhekar, Manish; Deshpande, Umesh

    2016-01-01

    Modeling of stem cells not only describes but also predicts how a stem cell's environment can control its fate. The first stem cell populations discovered were hematopoietic stem cells (HSCs). In this paper, we present a deterministic model of bone marrow (which hosts HSCs) that is consistent with several of the qualitative biological observations. This model incorporates stem cell death (apoptosis) after a certain number of cell divisions and also demonstrates that a single HSC can potentially populate the entire bone marrow. It further demonstrates that there is production of a sufficient number of differentiated cells (RBCs, WBCs, etc.). We prove that our model of bone marrow is biologically consistent and that it overcomes the biological feasibility limitations of previously reported models. The major contribution of our model is the flexibility it allows in choosing model parameters, which permits several different simulations to be carried out in silico without affecting the homeostatic properties of the model. We have also performed agent-based simulation of the model of the bone marrow system proposed in this paper, and we include parameter details and the results obtained from the simulation. The program for the agent-based simulation of the proposed model is made available on a publicly accessible website.

  19. Automated optimum design of wing structures. Deterministic and probabilistic approaches

    NASA Technical Reports Server (NTRS)

    Rao, S. S.

    1982-01-01

    The automated optimum design of airplane wing structures subjected to multiple behavior constraints is described. The structural mass of the wing is taken as the objective function. The maximum stress, wing-tip deflection, root angle of attack, and flutter velocity during the pull-up maneuver (static load), the natural frequencies of the wing structure, and the stresses induced in the wing structure due to landing and gust loads are suitably constrained. Both deterministic and probabilistic approaches are used for finding the stresses induced in the airplane wing structure due to landing and gust loads. A wing design is represented by a uniform beam with a cross section in the form of a hollow symmetric double wedge; the airfoil thickness and chord length are the design variables, and a graphical procedure is used to find the optimum solutions. A supersonic wing design is represented by finite elements; the thicknesses of the skin and the web and the cross-sectional areas of the flanges are the design variables, and nonlinear programming techniques are used to find the optimum solution.
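
    In current MATLAB terms, the deterministic version of such a formulation is a small constrained minimization; the mass and stress expressions below are crude illustrative stand-ins for the beam model, included only to show the shape of the problem.

      % Deterministic wing-sizing sketch: minimize mass subject to a
      % stress limit; the algebraic models are illustrative assumptions.
      mass   = @(x) 2800*x(1)*x(2)*10;           % ~ rho * thickness * chord * span
      stress = @(x) 5e5./(x(1).^2.*x(2));        % assumed bending stress model
      sigmaAllow = 250e6;                        % allowable stress (Pa)
      nonlcon = @(x) deal(stress(x) - sigmaAllow, []);   % c(x) <= 0, no equalities
      x0 = [0.05, 1.5];                          % thickness t (m), chord c (m)
      lb = [0.005, 0.5]; ub = [0.2, 4];
      xOpt = fmincon(mass, x0, [], [], [], [], lb, ub, nonlcon);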

  1. Object-oriented Matlab adaptive optics toolbox

    NASA Astrophysics Data System (ADS)

    Conan, R.; Correia, C.

    2014-08-01

    Object-Oriented Matlab Adaptive Optics (OOMAO) is a Matlab toolbox dedicated to Adaptive Optics (AO) systems. OOMAO is based on a small set of classes representing the source, atmosphere, telescope, wavefront sensor, Deformable Mirror (DM) and imager of an AO system. This simple set of classes allows simulating Natural Guide Star (NGS) and Laser Guide Star (LGS) Single Conjugate AO (SCAO) and tomography AO systems on telescopes up to the size of the Extremely Large Telescopes (ELT). The discrete phase screens that make up the atmosphere model can be of infinite size, useful for modeling system performance on large time scales. OOMAO comes with its own parametric influence function model to emulate different types of DMs. The cone effect, altitude thickness and intensity profile of LGSs are also reproduced. Both modal and zonal modeling approaches are implemented. OOMAO also has an extensive library of theoretical expressions to evaluate the statistical properties of turbulent wavefronts. The main design characteristics of the OOMAO toolbox are object-oriented modularity, vectorized code and transparent parallel computing. OOMAO has been used to simulate and to design the Multi-Object AO prototype Raven at the Subaru telescope and the Laser Tomography AO system of the Giant Magellan Telescope. In this paper, a Laser Tomography AO system on an ELT is simulated with OOMAO. In the first part, we set up the class parameters and link the instantiated objects to create the source optical path. Then we build the tomographic reconstructor and write the script for the pseudo-open-loop controller.

  2. Deterministic and Advanced Statistical Modeling of Wind-Driven Sea

    DTIC Science & Technology

    2015-07-06

    Report period: 01/09/2010-06/07/2015. Title: Deterministic and advanced statistical modeling of wind-driven sea. Authors: Vladimir Zakharov, Andrei Pushkarev, Waves and Solitons LLC. Objective: development of accurate and fast advanced statistical and dynamical nonlinear models of ocean surface waves, based on first physical principles.

  3. Structural deterministic safety factors selection criteria and verification

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1992-01-01

    Though current deterministic safety factors are arbitrarily and unaccountably specified, their ratio is rooted in resistive and applied stress probability distributions. This study approached the deterministic method from a probabilistic concept, leading to a more systematic and coherent philosophy and criterion for designing more uniform and reliable high-performance structures. The deterministic method was noted to consist of three safety factors: a standard deviation multiplier of the applied stress distribution; a K-factor for the A- or B-basis material ultimate stress; and the conventional safety factor to ensure that the applied stress does not operate in the inelastic zone of metallic materials. The conventional safety factor is specifically defined as the ratio of ultimate-to-yield stresses. A deterministic safety index of the combined safety factors was derived, from which the corresponding reliability proved that the deterministic method is not reliability sensitive. The bases for selecting safety factors are presented and verification requirements are discussed. The suggested deterministic approach is applicable to all NASA, DOD, and commercial high-performance structures under static stresses.

  4. Flexible missile autopilot design studies with PC-MATLAB/386

    NASA Technical Reports Server (NTRS)

    Ruth, Michael J.

    1989-01-01

    Development of a responsive, high-bandwidth missile autopilot for airframes which have structural modes of unusually low frequency presents a challenging design task. Such systems are viable candidates for modern, state-space control design methods. The PC-MATLAB interactive software package provides an environment well-suited to the development of candidate linear control laws for flexible missile autopilots. The strengths of MATLAB include: (1) exceptionally high speed (MATLAB's version for 80386-based PCs offers benchmarks approaching minicomputer and mainframe performance); (2) the ability to handle large design models of several hundred degrees of freedom, if necessary; and (3) broad extensibility through user-defined functions. To characterize MATLAB capabilities, a simplified design example is presented. This involves interactive definition of an observer-based state-space compensator for a flexible missile autopilot design task. MATLAB capabilities and limitations, in the context of this design task, are then summarized.
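
    The observer-based workflow such an example follows can be condensed to a few MATLAB lines; the fourth-order "rigid body plus one flexible mode" model below is a placeholder of our own, not a real airframe.

      % Observer-based state-space compensator sketch on a toy model.
      A = [0 1 0 0; -4 -0.02 0 0; 0 0 0 1; 0 0 -100 -0.2]; % rigid + flex mode
      B = [0; 1; 0; 0.3]; C = [1 0 1 0];                   % assumed actuator/sensor
      K = lqr(A, B, eye(4), 1);                 % state-feedback gain
      L = place(A', C', [-20 -22 -24 -26])';    % observer gain (fast poles)
      Acl = [A, -B*K; L*C, A - B*K - L*C];      % plant + observer closed loop
      disp(eig(Acl))                            % separation principle: stable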

  5. The recursive deterministic perceptron neural network.

    PubMed

    Tajine, Mohamed; Elizondo, David

    1998-12-01

    We introduce a feedforward multilayer neural network which is a generalization of the single layer perceptron topology (SLPT), called the recursive deterministic perceptron (RDP). This new model is capable of solving any two-class classification problem, as opposed to the single layer perceptron, which can only solve classification problems dealing with linearly separable sets (two subsets X and Y of R(d) are said to be linearly separable if there exists a hyperplane such that the elements of X and Y lie on the two opposite sides of R(d) delimited by this hyperplane). We propose several growing methods for constructing an RDP. These growing methods build an RDP by successively adding intermediate neurons (INs) to the topology (an IN corresponds to a SLPT). Thus, as a result, we obtain a multilayer perceptron topology which, together with the weights, is determined automatically by the constructing algorithms. Each IN augments the affine dimension of the set of input vectors; this augmentation is done by adding the output of each of these INs, as a new component, to every input vector. The construction of a new IN is made by selecting a subset from the set of augmented input vectors which is linearly separable from the rest of this set. This process ends with linearly separable classes in at most n-1 steps, where n is the number of input vectors. For this construction, if we assume that the selected linearly separable subsets are of maximum cardinality, the problem is proven to be NP-complete. We also introduce a generalization of the RDP model for classification of m classes (m>2), which always allows m classes to be separated. This generalization is based on a new notion of linear separability for m classes, and it follows naturally from the RDP. This new model can be used to compute functions with a finite domain and thus to approximate continuous functions. We have also compared - over several classification problems - the percentage of test data correctly classified, or the topology of the 2 and m classes RDPs with that of
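
    The SLPT building block itself is just the classic perceptron rule; the sketch below trains one on a linearly separable toy set, the operation the RDP construction repeats each time it adds an intermediate neuron.

      % Single layer perceptron (SLPT) sketch on a separable toy problem.
      X = [randn(20,2)+2; randn(20,2)-2];     % two linearly separable clusters
      y = [ones(20,1); -ones(20,1)];
      Xa = [X, ones(40,1)];                   % augment inputs with a bias term
      w = zeros(3,1);
      for epoch = 1:100
          nErr = 0;
          for i = 1:40
              if y(i)*(Xa(i,:)*w) <= 0        % misclassified point
                  w = w + y(i)*Xa(i,:)';      % perceptron update
                  nErr = nErr + 1;
              end
          end
          if nErr == 0, break, end            % linearly separated: done
      end

    In the RDP construction, the output of each trained SLPT is then appended to every input vector as a new component, raising the affine dimension until the remaining points are linearly separable.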

  6. Single Ion Implantation and Deterministic Doping

    SciTech Connect

    Schenkel, Thomas

    2010-06-11

    The presence of single atoms, e.g. dopant atoms, in sub-100 nm scale electronic devices can affect the device characteristics, such as the threshold voltage of transistors or the sub-threshold currents. Fluctuations in the number of dopant atoms thus pose a complication for transistor scaling. In a complementary view, new opportunities emerge when novel functionality can be implemented in devices deterministically doped with single atoms. The grand prize of the latter might be a large-scale quantum computer, where quantum bits (qubits) are encoded e.g. in the spin states of electrons and nuclei of single dopant atoms in silicon, or in color centers in diamond. Both the possible detrimental effects of dopant fluctuations and single-atom device ideas motivate the development of reliable single-atom doping techniques, which are the subject of this chapter. Single-atom doping can be approached with top-down and bottom-up techniques. Top-down refers to the placement of dopant atoms into a more or less structured matrix environment, like a transistor in silicon. Bottom-up refers to approaches that introduce single dopant atoms during the growth of the host matrix, e.g. by directed self-assembly and scanning-probe-assisted lithography. Bottom-up approaches are discussed in Chapter XYZ. Since the late 1960s, ion implantation has been a widely used technique to introduce dopant atoms into silicon and other materials in order to modify their electronic properties. It works particularly well in silicon since the damage to the crystal lattice that is induced by ion implantation can be repaired by thermal annealing. In addition, the introduced dopant atoms can be incorporated with high efficiency into lattice positions in the silicon host crystal, which makes them electrically active. This is not the case for e.g. diamond, which makes ion implantation doping to engineer the electrical properties of diamond, especially for n-type doping, much harder than for silicon.

  7. Gene ARMADA: an integrated multi-analysis platform for microarray data implemented in MATLAB

    PubMed Central

    Chatziioannou, Aristotelis; Moulos, Panagiotis; Kolisis, Fragiskos N

    2009-01-01

    Background The microarray data analysis realm is ever growing through the development of various tools, open source and commercial. However, there is an absence of predefined rational algorithmic analysis workflows or batch standardized processing to incorporate all steps, from raw data import up to the derivation of significantly differentially expressed gene lists. This absence obfuscates the analytical procedure and obstructs the massive comparative processing of genomic microarray datasets. Moreover, the solutions provided depend heavily on the programming skills of the user, whereas in the case of GUI-embedded solutions, they do not provide direct support of various raw image analysis formats or a versatile and simultaneously flexible combination of signal processing methods. Results We describe here Gene ARMADA (Automated Robust MicroArray Data Analysis), a MATLAB implemented platform with a Graphical User Interface. This suite integrates all steps of microarray data analysis including automated data import, noise correction and filtering, normalization, statistical selection of differentially expressed genes, clustering, classification and annotation. In its current version, Gene ARMADA fully supports two-colour cDNA and Affymetrix oligonucleotide arrays, plus custom arrays for which experimental details are given in tabular form (Excel spreadsheet, comma separated values, tab-delimited text formats). It also supports the analysis of already processed results through its versatile import editor. Besides being fully automated, Gene ARMADA incorporates numerous functionalities of the Statistics and Bioinformatics Toolboxes of MATLAB. In addition, it provides numerous visualization and exploration tools, plus customizable export data formats for seamless integration with other analysis tools or MATLAB for further processing. Gene ARMADA requires MATLAB 7.4 (R2007a) or higher and is also distributed as a stand-alone application with the MATLAB Component Runtime.

  8. A MATLAB GUI based algorithm for modelling Magnetotelluric data

    NASA Astrophysics Data System (ADS)

    Timur, Emre; Onsen, Funda

    2016-04-01

    The magnetotelluric method is an electromagnetic survey technique that images the electrical resistivity distribution of layers at subsurface depths. It simultaneously measures the total electromagnetic field components, i.e. both the time-varying magnetic field B(t) and the induced electric field E(t). Forward modeling of the magnetotelluric method is valuable for survey planning, for comprehending the method (especially for students), and as part of the iteration process in inverting measured data. The MTINV program can be used to model and to interpret geophysical electromagnetic (EM) magnetotelluric (MT) measurements using a horizontally layered earth model. This program uses either the apparent resistivity and phase components of the MT data together or the apparent resistivity data alone. Parameter optimization, based on a linearized inversion method, can be utilized in 1D interpretations. In this study, a new MATLAB GUI based algorithm has been written for the 1D forward modeling of the magnetotelluric response function for multiple layers, for use in educational studies. The code also includes an automatic Gaussian noise option for a specified noise ratio. Numerous applications were carried out and presented for 2-, 3- and 4-layer models, and the obtained theoretical data were interpreted using MTINV in order to evaluate the initial parameters and the effect of noise. Keywords: Education, Forward Modelling, Inverse Modelling, Magnetotelluric
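
    The 1D forward problem itself is the textbook impedance recursion for a layered half-space, which fits in a dozen lines; the three-layer model below is arbitrary, and this is the standard algorithm rather than the MTINV source.

      % 1D magnetotelluric forward modelling: layered-earth impedance recursion.
      rho = [100 10 1000]; h = [500 1000];   % resistivities (ohm-m), thicknesses (m)
      f = logspace(-3, 2, 40); mu0 = 4e-7*pi;
      rhoa = zeros(size(f)); phs = zeros(size(f));
      for n = 1:numel(f)
          w = 2*pi*f(n);
          k = sqrt(1i*w*mu0./rho);           % propagation constant per layer
          Z = 1i*w*mu0/k(end);               % basement intrinsic impedance
          for j = numel(rho)-1:-1:1          % recurse upward through the stack
              zj = 1i*w*mu0/k(j); tj = tanh(k(j)*h(j));
              Z = zj*(Z + zj*tj)/(zj + Z*tj);
          end
          rhoa(n) = abs(Z)^2/(w*mu0);        % apparent resistivity
          phs(n)  = angle(Z)*180/pi;         % impedance phase (degrees)
      end
      loglog(f, rhoa)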

  9. OPTICON: Pro-Matlab software for large order controlled structure design

    NASA Technical Reports Server (NTRS)

    Peterson, Lee D.

    1989-01-01

    A software package for large-order controlled structure design is described and demonstrated. The primary program, called OPTICON, uses both Pro-Matlab M-file routines and selected compiled FORTRAN routines linked into the Pro-Matlab structure. The program accepts structural model information in the form of state-space matrices and performs three basic design functions on the model: (1) open-loop analyses; (2) closed-loop reduced-order controller synthesis; and (3) closed-loop stability and performance assessment. The controller synthesis methods currently implemented in this software are based on the Generalized Linear Quadratic Gaussian theory of Bernstein. In particular, a reduced-order Optimal Projection synthesis algorithm based on a homotopy solution method was successfully applied to an experimental truss structure using a 58-state dynamic model. These results are presented and discussed. Current plans to expand the practical size of the design model to several hundred states and the intention to interface Pro-Matlab to a supercomputing environment are discussed.

  10. Arc_Mat: a Matlab-based spatial data analysis toolbox

    NASA Astrophysics Data System (ADS)

    Liu, Xingjian; Lesage, James

    2010-03-01

    This article presents an overview of Arc_Mat, a Matlab-based spatial data analysis software package whose source code has been placed in the public domain. An earlier version of the Arc_Mat toolbox was developed to extract map polygon and database information from ESRI shapefiles and provide high-quality mapping in the Matlab software environment. We discuss revisions to the toolbox that utilize the enhanced computing and graphing capabilities of more recent versions of Matlab, restructure the toolbox with object-oriented programming features, and provide more comprehensive functions for spatial data analysis. The Arc_Mat toolbox functionality includes basic choropleth mapping; exploratory spatial data analysis that provides exploratory views of spatial data through various graphs, for example, histogram, Moran scatterplot, three-dimensional scatterplot, density distribution plot, and parallel coordinate plots; and more formal spatial data modeling that draws on the extensive Spatial Econometrics Toolbox functions. A brief review of the design aspects of the revised Arc_Mat is provided, and we give some illustrative examples that highlight representative uses of the toolbox. Finally, we discuss programming with and customizing the Arc_Mat toolbox functionalities.

  11. A Parallel Controls Software Approach for PEP II: AIDA & Matlab Middle Layer

    SciTech Connect

    Wittmer, W.; Colocho, W.; White, G.; /SLAC

    2007-11-06

    The controls software in use at PEP II (Stanford Control Program - SCP) had originally been developed in the eighties. It is very successful in routine operation, but due to its internal structure it is difficult and time-consuming to extend its functionality. This is problematic during machine development and when solving operational issues. Routinely, data has to be exported from the system, analyzed offline, and the calculated settings reimported. Since this is a manual process, it is time-consuming and error-prone. Setting up automated processes, as is done for MIA (Model Independent Analysis), is also time-consuming and specific to each application. Recently, there has been a trend at light sources to use MATLAB as the platform to control accelerators using a 'MATLAB Middle Layer' (MML) and so-called channel access (CA) programs to communicate with the low-level control system (LLCS). This has proven very successful, especially during machine development time and troubleshooting. A special CA code, named AIDA (Accelerator Independent Data Access), was developed to handle the communication between MATLAB, modern software frameworks, and the SCP. The MML had to be adapted for implementation at PEP II. Colliders differ significantly in their design from light sources, which poses a challenge. PEP II is the first collider at which this implementation is being done. We report on this effort, which is still ongoing.

  12. A working memory test battery for MATLAB.

    PubMed

    Lewandowsky, Stephan; Oberauer, Klaus; Yang, Lee-Xieng; Ecker, Ullrich K H

    2010-05-01

    We present a battery of four working memory tasks that are implemented using MATLAB and the free Psychophysics Toolbox. The package includes preprocessing scripts in R and SPSS to facilitate data analysis. The four tasks consist of a sentence-span task, an operation-span task, a spatial short-term memory test, and a memory updating task. These tasks were chosen in order to provide a heterogeneous set of measures of working memory capacity, thus reducing method variance and tapping into two content domains of working memory (verbal, including numerical, vs. spatial) and two of its functional aspects (storage in the context of processing and relational integration). The task battery was validated in three experiments conducted in two languages (English and Chinese), involving more than 350 participants. In all cases, the tasks were found to load on a single latent variable. In a further experiment, the latent working memory variable was found to correlate highly but not perfectly with performance on Raven's matrices test of fluid intelligence. We suggest that the battery constitutes a versatile tool to assess working memory capacity with either English- or Chinese-speaking participants. The battery can be downloaded from www.cogsciwa.com ("Software" button).

  13. Teaching real-time ultrasonic imaging with a 4-channel sonar array, TI C6711 DSK and MATLAB.

    PubMed

    York, George W P; Welch, Thad B; Wright, Cameron H G

    2005-01-01

    Ultrasonic medical imaging courses often stop at the theory or MATLAB simulation level, since professors find it challenging to give students the experience of designing a real-time ultrasonic system. Some of the practical problems of working with real-time data from ultrasonic transducers can be avoided by working in a lower frequency range (sonar to low ultrasound). To facilitate this, we have created a platform combining the ease of MATLAB programming with the real-time processing capability of the low-cost Texas Instruments C6711 DSP starter kit and a 4-channel sonar array. With this platform, students can design a B-mode or Color-Mode sonar system in the MATLAB environment. This paper demonstrates how the platform can be used in the classroom to illustrate the real-time signal processing stages, including beamforming, multi-rate sampling, demodulation, filtering, image processing, echo imaging, and Doppler frequency estimation.
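
    At these frequencies the core beamforming stage reduces to a few lines; the sketch below computes a narrowband delay-and-sum beam pattern for a four-element array, with parameters chosen for illustration rather than taken from the kit.

      % Narrowband delay-and-sum beamforming sketch for a 4-element array.
      c = 1500; f0 = 40e3; M = 4; d = c/f0/2;    % sound speed, carrier, spacing
      theta0 = 20*pi/180;                        % assumed source bearing
      a = @(th) exp(-2i*pi*f0*(0:M-1)'*d*sin(th)/c);  % steering vector
      x = a(theta0);                             % noise-free array snapshot
      scan = (-90:90)*pi/180;
      p = arrayfun(@(th) abs(a(th)'*x)/M, scan); % beam power over bearings
      plot(scan*180/pi, p); xlabel('bearing (deg)')   % peak near 20 degrees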

  14. MOTO: a Matlab object-oriented programming toolbox for optics

    NASA Astrophysics Data System (ADS)

    Anterrieu, Eric; Pérez, José-Philippe

    2007-06-01

    Ray optics is the branch of optics in which all wave effects are neglected: light is considered as travelling along rays which can only change their direction by refraction or reflection. On one hand, a further simplifying approximation can be made if attention is restricted to rays travelling close to the optical axis and at small angles: the well-known linear or paraxial approximation introduced by Gauss. On the other hand, in order to take into account the geometrical aberrations, it is sometimes necessary to pay attention to marginal rays with the aid of a ray-tracing procedure. This contribution describes a toolbox for the study of optical systems which implements both approaches. It has been developed in the framework of an educational project, but it is general enough to be useful in most cases.
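
    In the paraxial regime, the first of these approaches boils down to 2x2 ray-transfer (ABCD) matrices; a minimal sketch, with arbitrary distances and focal length (not the toolbox's actual API):

      % Paraxial ray propagation with ABCD matrices (thin lens example).
      prop = @(dist) [1 dist; 0 1];        % free-space translation
      lens = @(fl)   [1 0; -1/fl 1];       % thin lens of focal length fl
      ray  = [5e-3; 0];                    % 5 mm high ray, parallel to axis
      M    = prop(0.1)*lens(0.1)*prop(0.2);    % 0.2 m gap, f = 0.1 m lens, 0.1 m gap
      out  = M*ray;                        % [height; angle] at the output plane
      fprintf('height %.3g m, angle %.3g rad\n', out(1), out(2));

    As expected, a ray entering parallel to the axis crosses it at the back focal plane: the computed output height is zero.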

  15. Kinematic analysis of the finger exoskeleton using MATLAB/Simulink.

    PubMed

    Nasiłowski, Krzysztof; Awrejcewicz, Jan; Lewandowski, Donat

    2014-01-01

    A paralyzed or not fully functional part of the human body can be supported by a properly designed exoskeleton system with motoric abilities, which can help in rehabilitation or in moving a disabled/paralyzed limb. Both the suitably selected geometry and the specialized software are studied using the MATLAB environment. A finger exoskeleton was the basis for the MATLAB/Simulink model. Specialized software such as MATLAB/Simulink gives us an opportunity to optimize the calculations and reach precise results, which helps in the next steps of the design process. The calculations carried out yield information on the movement relations between three functionally connected actuators and show the distance and velocity changes during the whole simulation time.

  16. Matlab Tools: An Alternative to Planning Systems in Brachytherapy Treatments

    SciTech Connect

    Herrera, Higmar

    2006-09-08

    This work proposes the use of the Matlab environment to obtain the treatment dose based on the data reported by Krishnaswamy and Liu et al. A comparison with reported measurements is shown for the Amersham source model. For the 3M source model, measurements with TLDs and a Monte Carlo simulation are compared to the data obtained with Matlab. The difference for the Amersham model is well under the 15% recommended by the IAEA; for the 3M model, although the difference is greater, the results are consistent. The good agreement with the reported data allows the Matlab calculations to be used in daily brachytherapy treatments.

  17. Development of a Deterministic Ethernet Building blocks for Space Applications

    NASA Astrophysics Data System (ADS)

    Fidi, C.; Jakovljevic, Mirko

    2015-09-01

    The benefits of using commercially based networking standards and protocols have been widely discussed and are expected to include reductions in overall mission cost, shortened integration and test (I&T) schedules, increased operations flexibility, and hardware and software upgradeability/scalability in step with developments in the commercial world. The deterministic Ethernet technology TTEthernet [1], deployed on the NASA Orion spacecraft, demonstrated the use of this technology for a safety-critical human spaceflight application during Exploration Flight Test 1 (EFT-1). The TTEthernet technology used within the NASA Orion program was matured for that mission but did not lead to broader use in space applications or to an international space standard. TTTech has therefore developed a new version which allows the technology to be scaled for different applications, not only high-end missions, by decreasing the size of the building blocks and thus reducing size, weight and power, enabling use in smaller applications. TTTech is currently developing a full space product offering for its TTEthernet technology to allow its use in space applications beyond launchers and human spaceflight. A broad space market assessment and the current ESA TRP7594 led to the development of a space-grade TTEthernet controller ASIC based on the ESA-qualified Atmel AT1C8RHA95 process [2]. In this paper we describe our current TTEthernet controller development towards a space-qualified network component, allowing future spacecraft to operate in significant radiation environments while using a single onboard network for reliable commanding and data transfer.

  18. Graphics development of DCOR: Deterministic combat model of Oak Ridge

    SciTech Connect

    Hunt, G.; Azmy, Y.Y.

    1992-10-01

    DCOR is a user-friendly computer implementation of a deterministic combat model developed at ORNL. To make the interpretation of the results more intuitive, a conversion of the numerical solution to a graphic animation sequence of battle evolution is desirable. DCOR uses a coarse computational spatial mesh superimposed on the battlefield. This research is aimed at developing robust methods for computing the position of the combative units over the continuum (and also pixeled) battlefield, from DCOR's discrete-variable solution representing the density of each force type evaluated at gridpoints. Three main problems have been identified and solutions have been devised and implemented in a new visualization module of DCOR. First, there is the problem of distributing the total number of objects, each representing a combative unit of each force type, among the gridpoints at each time level of the animation. This problem is solved by distributing, for each force type, the total number of combative units, one by one, to the gridpoint with the largest calculated number of units. Second, there is the problem of distributing the number of units assigned to each computational gridpoint over the battlefield area attributed to that point. This problem is solved by distributing the units within that area while taking into account the influence of surrounding gridpoints using linear interpolation. Finally, time-interpolated solutions must be generated to produce a sufficient number of frames to create a smooth animation sequence. Currently, enough frames may be generated either by direct computation via the PDE solver or by using linear programming techniques to linearly interpolate intermediate frames between calculated frames.
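
    The first of the three steps, greedy unit placement, is simple to express; the sketch below hands a fixed number of unit icons to the gridpoints holding the largest remaining density (an illustration of the described rule, not DCOR source code).

      % Greedy distribution of unit icons over a density grid (illustrative).
      density = rand(10);                     % stand-in for one force's solution
      nUnits = 25;
      remaining = density/sum(density(:))*nUnits;  % expected units per gridpoint
      counts = zeros(size(density));
      for u = 1:nUnits
          [~, idx] = max(remaining(:));       % gridpoint with most units left
          counts(idx) = counts(idx) + 1;      % place one icon there
          remaining(idx) = remaining(idx) - 1;
      end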

  19. A Series of Computational Neuroscience Labs Increases Comfort with MATLAB.

    PubMed

    Nichols, David F

    2015-01-01

    Computational simulations allow for a low-cost, reliable means to demonstrate complex and often inaccessible concepts to undergraduates. However, students without prior computer programming training may find working with code-based simulations intimidating and distracting. A series of computational neuroscience labs involving the Hodgkin-Huxley equations, an integrate-and-fire model, and a Hopfield memory network were used in the undergraduate neuroscience laboratory component of an introductory-level course. Using short, focused surveys before and after each lab, student comfort levels were shown to increase drastically, from a majority of students being uncomfortable or neutral about working in the MATLAB environment to a vast majority of students being comfortable working in the environment. Though change was reported within each lab, a series of labs was necessary in order to establish a lasting high level of comfort. Comfort working with code is important as a first step in acquiring the computational skills required to address many questions within neuroscience.
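
    The integrate-and-fire lab, for instance, needs only a loop of a few lines; the sketch below is the standard leaky integrate-and-fire neuron with assumed membrane constants, not the course's exact script.

      % Leaky integrate-and-fire neuron driven by a constant current.
      dt = 1e-4; t = 0:dt:0.5;                 % 0.1 ms steps, 500 ms run
      tau = 0.02; R = 1e7;                     % membrane time constant, resistance
      Vrest = -0.065; Vth = -0.050; Vreset = -0.070; I = 2e-9;
      V = Vrest*ones(size(t)); spikes = [];
      for n = 1:numel(t)-1
          V(n+1) = V(n) + dt*(-(V(n) - Vrest) + R*I)/tau;  % leaky integration
          if V(n+1) >= Vth                     % threshold crossing: spike
              V(n+1) = Vreset; spikes(end+1) = t(n+1);
          end
      end
      plot(t, V)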

  20. Deterministic teleportation of electrons in a quantum dot nanostructure.

    PubMed

    de Visser, R L; Blaauboer, M

    2006-06-23

    We present a proposal for deterministic quantum teleportation of electrons in a semiconductor nanostructure consisting of a single and a double quantum dot. The central issue addressed in this Letter is how to design and implement the most efficient--in terms of the required number of single and two-qubit operations--deterministic teleportation protocol for this system. Using a group-theoretical analysis, we show that deterministic teleportation requires a minimum of three single-qubit rotations and two entangling (square root SWAP) operations. These can be implemented for spin qubits in quantum dots using electron-spin resonance (for single-spin rotations) and exchange interaction (for square root SWAP operations).

  1. Deterministic sensing matrices in compressive sensing: a survey.

    PubMed

    Nguyen, Thu L N; Shin, Yoan

    2013-01-01

    Compressive sensing is a sampling method which provides a new approach to efficient signal compression and recovery by exploiting the fact that a sparse signal can be suitably reconstructed from very few measurements. One of the main concerns in compressive sensing is the construction of the sensing matrices. While random sensing matrices have been widely studied, only a few deterministic sensing matrices have been considered. These matrices are highly desirable because their structure allows fast implementation with reduced storage requirements. In this paper, a survey of deterministic sensing matrices for compressive sensing is presented. We introduce a basic problem in compressive sensing and some disadvantages of random sensing matrices. Some recent results on the construction of deterministic sensing matrices are discussed.
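
    One frequently cited deterministic construction is the chirp sensing matrix, whose mutual coherence can be checked directly; the sketch below builds a K x K^2 chirp matrix for prime K (a standard example from this literature, not a matrix taken from the survey itself).

      % Build a K x K^2 chirp sensing matrix and compute its coherence.
      K = 17; t = (0:K-1)';                  % K prime keeps coherence at 1/sqrt(K)
      A = zeros(K, K^2); col = 1;
      for r = 0:K-1                          % chirp rate
          for m = 0:K-1                      % base frequency
              A(:, col) = exp(2i*pi*(r*t.^2 + m*t)/K)/sqrt(K);
              col = col + 1;
          end
      end
      G = abs(A'*A); G(1:K^2+1:end) = 0;     % off-diagonal Gram magnitudes
      fprintf('coherence %.3f vs 1/sqrt(K) = %.3f\n', max(G(:)), 1/sqrt(K));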

  2. Estimating the epidemic threshold on networks by deterministic connections

    SciTech Connect

    Li, Kezan; Zhu, Guanghu; Fu, Xinchu; Small, Michael

    2014-12-15

    For many epidemic networks some connections between nodes are treated as deterministic, while the remainder are random and have different connection probabilities. By applying spectral analysis to several constructed models, we find that one can estimate the epidemic thresholds of these networks by investigating information from only the deterministic connections. Nonetheless, in these models, generic nonuniform stochastic connections and heterogeneous community structure are also considered. The estimation of epidemic thresholds is achieved via inequalities with upper and lower bounds, which are found to be in very good agreement with numerical simulations. Since deterministic connections are easier to detect than stochastic ones, this work provides a feasible and effective method to estimate the epidemic thresholds in real epidemic networks.
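
    The spectral quantity underlying such estimates is easy to compute once the deterministic adjacency structure is fixed: for SIS-type dynamics the threshold scales as the inverse of the adjacency spectral radius. A minimal sketch on a random stand-in network:

      % Epidemic threshold estimate from the adjacency spectral radius.
      N = 200;
      A = triu(rand(N) < 0.03, 1);           % fixed (deterministic) links
      A = double(A + A');                    % symmetric adjacency matrix
      lambdaMax = max(eig(A));               % spectral radius
      fprintf('estimated SIS threshold ~ 1/lambda_max = %.3f\n', 1/lambdaMax);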

  4. Parallelization of MATLAB for Euro50 integrated modeling

    NASA Astrophysics Data System (ADS)

    Browne, Michael; Andersen, Torben E.; Enmark, Anita; Moraru, Dan; Shearer, Andrew

    2004-09-01

    MATLAB and its companion product Simulink are commonly used tools in systems modelling and other scientific disciplines. A cross-disciplinary integrated MATLAB model is used to study the overall performance of the proposed 50m optical and infrared telescope, Euro50. However, the computational requirements of this kind of end-to-end simulation of the telescope's behaviour exceed the capability of an individual contemporary personal computer. By parallelizing the model, primarily on a functional basis, it can be implemented across a Beowulf cluster of generic PCs. This requires MATLAB to distribute data and calculations to the cluster nodes in some way and to combine the completed results. There have been a number of attempts to produce toolkits that allow MATLAB to be used in a parallel fashion, using a variety of techniques. Here we present findings from using some of these toolkits and propose advances.
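
    The toolkits evaluated in the paper predate MATLAB's own parallel support, but the functional decomposition they aim at can be sketched today with the Parallel Computing Toolbox, which farms loop iterations out to workers (e.g., nodes of a cluster). The model function below is a hypothetical stand-in:

    ```matlab
    % Not one of the toolkits surveyed in the paper: a minimal sketch of
    % the same functional-decomposition idea using the Parallel Computing
    % Toolbox, which distributes loop iterations across workers and
    % gathers the results.
    gcp;                             % start/reuse a pool of workers
    nCases = 8;
    results = zeros(1, nCases);
    parfor k = 1:nCases              % iterations run concurrently on workers
        results(k) = expensiveModelStep(k);   % hypothetical model function
    end

    function y = expensiveModelStep(k)
    % Stand-in for one functional block of an integrated model.
    y = sum(svd(rand(300)));         % arbitrary heavy computation
    end
    ```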

  5. Complexity of Monte Carlo and deterministic dose-calculation methods.

    PubMed

    Börgers, C

    1998-03-01

    Grid-based deterministic dose-calculation methods for radiotherapy planning require the use of six-dimensional phase space grids. Because of the large number of phase space dimensions, a growing number of medical physicists appear to believe that grid-based deterministic dose-calculation methods are not competitive with Monte Carlo methods. We argue that this conclusion may be premature. Our results do suggest, however, that finite difference or finite element schemes with orders of accuracy greater than one will probably be needed if such methods are to compete well with Monte Carlo methods for dose calculations.

  6. Deterministic and efficient quantum cryptography based on Bell's theorem

    SciTech Connect

    Chen Zengbing; Pan Jianwei; Zhang Qiang; Bao Xiaohui; Schmiedmayer, Joerg

    2006-05-15

    We propose a double-entanglement-based quantum cryptography protocol that is both efficient and deterministic. The proposal uses photon pairs with entanglement in both polarization and time degrees of freedom; each measurement in which both of the two communicating parties register a photon can establish one and only one perfect correlation, and thus deterministically create a key bit. Eavesdropping can be detected by violation of local realism. A variation of the protocol offers higher security, similar to the six-state protocol, under individual attacks. Our scheme allows a robust implementation under current technology.

  7. Inherent Conservatism in Deterministic Quasi-Static Structural Analysis

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1997-01-01

    The cause of the long-suspected excessive conservatism in the prevailing structural deterministic safety factor has been identified as an inherent violation of the error propagation laws when statistical data are reduced to deterministic values and then combined algebraically through successive structural computational processes. These errors are restricted to the applied stress computations, and because the means and variations of the tolerance-limit format are added, the errors are positive, serially cumulative, and excessively conservative. Reliability methods circumvent these errors and provide more efficient and uniformly safe structures. The document is a tutorial on the deficiencies and nature of the current safety factor and on its improvement and transition to absolute reliability.

  8. Deterministic extinction by mixing in cyclically competing species

    NASA Astrophysics Data System (ADS)

    Feldager, Cilie W.; Mitarai, Namiko; Ohta, Hiroki

    2017-03-01

    We consider a cyclically competing species model on a ring with global mixing at finite rate, which corresponds to the well-known Lotka-Volterra equation in the limit of infinite mixing rate. Within a perturbation analysis of the model from the infinite mixing rate, we provide analytical evidence that extinction occurs deterministically at sufficiently large but finite values of the mixing rate for any species number N ≥ 3. Further, by focusing on the cases of rather small species numbers, we discuss numerical results concerning the trajectories toward such deterministic extinction, including global bifurcations caused by changing the mixing rate.
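
    For orientation, the infinite-mixing-rate limit mentioned above is the classical cyclic Lotka-Volterra (rock-paper-scissors) system, which a few lines of MATLAB can integrate; this generic sketch is not the paper's finite-mixing ring model:

    ```matlab
    % Generic cyclic Lotka-Volterra sketch (the infinite-mixing-rate limit
    % mentioned above, not the paper's finite-mixing ring model). For N = 3,
    % species i gains from species i+1 and loses to species i-1 (cyclically).
    N = 3;
    P = circshift(eye(N), -1, 2) - circshift(eye(N), 1, 2); % cyclic payoff matrix
    rhs = @(t, x) x .* (P * x);              % replicator-type LV dynamics
    x0 = [0.5; 0.3; 0.2];                    % initial densities (sum to 1)
    [t, x] = ode45(rhs, [0 100], x0);
    plot(t, x); xlabel('time'); ylabel('density'); legend('1', '2', '3');
    ```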

  9. Subband/Transform MATLAB Functions For Processing Images

    NASA Technical Reports Server (NTRS)

    Glover, D.

    1995-01-01

    SUBTRANS software is a package of routines implementing image-data-processing functions for use with MATLAB(TM) software. It provides the capability to transform image data with block transforms and to produce spatial-frequency subbands of the transformed data. Functions can be cascaded to provide further decomposition into more subbands. The package is also used in image-data-compression systems; for example, the transforms are used to prepare data for lossy compression. Written for use in the MATLAB mathematical-analysis environment.
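
    SUBTRANS itself is not reproduced here, but the block-transform idea it implements can be sketched with standard MATLAB tools (Image Processing Toolbox assumed):

    ```matlab
    % Sketch of the block-transform idea (not the SUBTRANS code itself):
    % apply an 8x8 block DCT to an image, as done when preparing data for
    % lossy compression. Requires the Image Processing Toolbox.
    I = im2double(imread('cameraman.tif'));          % demo image shipped with MATLAB
    C = blockproc(I, [8 8], @(b) dct2(b.data));      % 8x8 block DCT coefficients
    R = blockproc(C, [8 8], @(b) idct2(b.data));     % inverse transform recovers image
    fprintf('Max reconstruction error: %.2e\n', max(abs(I(:) - R(:))));
    ```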

  10. Matlab-Excel Interface for OpenDSS

    SciTech Connect

    2015-04-27

    The software allows users of the OpenDSS grid modeling software to access their load flow models using a GUI interface developed in MATLAB. The circuit definitions are entered into a Microsoft Excel spreadsheet, which makes circuit creation and editing a much simpler process than with the basic text-based editors used in the native OpenDSS interface. Plot tools have been developed which can be accessed through a MATLAB GUI once the desired parameters have been simulated.
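
    A minimal sketch of the underlying link is shown below. OpenDSS exposes a COM automation server on Windows, which MATLAB can drive via actxserver; the spreadsheet name and its one-command-per-row layout are assumptions for illustration:

    ```matlab
    % Minimal sketch of the MATLAB-to-OpenDSS link (Windows only; assumes
    % OpenDSS is installed and registered as a COM server). 'circuit.xlsx'
    % is a hypothetical spreadsheet of element definitions.
    DSSObj  = actxserver('OpenDSSEngine.DSS');   % attach to the OpenDSS COM server
    DSSObj.Start(0);
    DSSText = DSSObj.Text;                       % text command interface

    T = readtable('circuit.xlsx');               % one OpenDSS command per row (assumed)
    DSSText.Command = 'Clear';
    for k = 1:height(T)
        DSSText.Command = T.Command{k};          % e.g. 'New Line.L1 bus1=a bus2=b ...'
    end
    DSSText.Command = 'Solve';
    V = DSSObj.ActiveCircuit.AllBusVmagPu;       % per-unit bus voltage magnitudes
    ```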

  11. GSGPEs: A MATLAB code for computing the ground state of systems of Gross-Pitaevskii equations

    NASA Astrophysics Data System (ADS)

    Caliari, Marco; Rainer, Stefan

    2013-03-01

    GSGPEs is a Matlab/GNU Octave suite of programs for the computation of the ground state of systems of Gross-Pitaevskii equations. It can compute the ground state in the defocusing case, for any number of equations with harmonic or quasi-harmonic trapping potentials, in spatial dimension one, two or three. The computation is based on a spectral decomposition of the solution into Hermite functions and direct minimization of the energy functional through a Newton-like method with an approximate line-search strategy. Catalogue identifier: AENT_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENT_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 1417 No. of bytes in distributed program, including test data, etc.: 13673 Distribution format: tar.gz Programming language: Matlab/GNU Octave. Computer: Any supporting Matlab/GNU Octave. Operating system: Any supporting Matlab/GNU Octave. RAM: About 100 MB for a single three-dimensional equation (test run output). Classification: 2.7, 4.9. Nature of problem: A system of Gross-Pitaevskii Equations (GPEs) is used to mathematically model a Bose-Einstein Condensate (BEC) for a mixture of different interacting atomic species. The equations can be used both to compute the ground state solution (i.e., the stationary order parameter that minimizes the energy functional) and to simulate the dynamics. For particular shapes of the traps, three-dimensional BECs can be also simulated by lower dimensional GPEs. Solution method: The ground state of a system of Gross-Pitaevskii equations is computed through a spectral decomposition into Hermite functions and the direct minimization of the energy functional. Running time: About 30 seconds for a single three-dimensional equation with d.o.f. 40 for each spatial direction (test run output).
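
    GSGPEs' Hermite-spectral minimization is not reproduced here; as a point of comparison, a common alternative for a single 1D GPE is imaginary-time split-step Fourier propagation with renormalization, sketched below:

    ```matlab
    % Not GSGPEs' Hermite-spectral minimization: a common alternative sketch
    % that finds a 1D GPE ground state by imaginary-time split-step Fourier
    % propagation, renormalizing the wave function after every step.
    L = 16; n = 256; x = (-n/2:n/2-1)' * (L/n);        % periodic grid
    k = (2*pi/L) * [0:n/2-1, -n/2:-1]';                % Fourier wavenumbers
    V = 0.5 * x.^2;                                    % harmonic trap
    g = 10;                                            % repulsive (defocusing) nonlinearity
    dt = 1e-3;                                         % imaginary-time step
    psi = exp(-x.^2); psi = psi / sqrt(sum(abs(psi).^2) * (L/n));
    for it = 1:20000
        psi = exp(-0.5*dt*(V + g*abs(psi).^2)) .* psi; % half potential step
        psi = ifft(exp(-dt*0.5*k.^2) .* fft(psi));     % kinetic step in Fourier space
        psi = exp(-0.5*dt*(V + g*abs(psi).^2)) .* psi; % half potential step
        psi = psi / sqrt(sum(abs(psi).^2) * (L/n));    % restore unit norm
    end
    plot(x, abs(psi).^2); xlabel('x'); ylabel('|\psi|^2');
    ```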

  12. A fast algorithm for voxel-based deterministic simulation of X-ray imaging

    NASA Astrophysics Data System (ADS)

    Li, Ning; Zhao, Hua-Xia; Cho, Sang-Hyun; Choi, Jung-Gil; Kim, Myoung-Hee

    2008-04-01

    The deterministic method based on ray tracing is known as a powerful alternative to the Monte Carlo approach for virtual X-ray imaging. The algorithm speed is a critical issue in the perspective of simulating hundreds of images, notably to simulate tomographic acquisition or even more, to simulate X-ray radiographic video recordings. We present an algorithm for voxel-based deterministic simulation of X-ray imaging using voxel-driven forward and backward perspective projection operations and minimum bounding rectangles (MBRs). The algorithm is fast, easy to implement, and creates high-quality simulated radiographs. As a result, simulated radiographs can typically be obtained in a split second on a simple personal computer. Program summary Program title: X-ray Catalogue identifier: AEAD_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEAD_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 416 257 No. of bytes in distributed program, including test data, etc.: 6 018 263 Distribution format: tar.gz Programming language: C (Visual C++) Computer: Any PC. Tested on DELL Precision 380 based on a Pentium D 3.20 GHz processor with 3.50 GB of RAM Operating system: Windows XP Classification: 14, 21.1 Nature of problem: Radiographic simulation of voxelized objects based on ray tracing technique. Solution method: The core of the simulation is a fast routine for the calculation of ray-box intersections and minimum bounding rectangles, together with voxel-driven forward and backward perspective projection operations. Restrictions: Memory constraints. There are three programs in all. A. Program for test 3.1(1): Object and detector have axis-aligned orientation; B. Program for test 3.1(2): Object in arbitrary orientation; C. Program for test 3.2: Simulation of X-ray video
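
    The core geometric routine named above, ray-box intersection, is commonly implemented with the slab method; a minimal MATLAB sketch (illustrative numbers) follows:

    ```matlab
    % Sketch of the core geometric routine named above: slab-method
    % intersection of a ray with an axis-aligned box (e.g., the voxel volume).
    o = [-5; 0.2; 0.3];                      % ray origin
    d = [1; 0.1; 0]; d = d / norm(d);        % ray direction (unit length)
    bmin = [0; 0; 0]; bmax = [1; 1; 1];      % box corners

    t1 = (bmin - o) ./ d;                    % per-axis slab parameters (zero
    t2 = (bmax - o) ./ d;                    %  components of d give +/-Inf)
    tNear = max(min(t1, t2));                % latest entry over the three slabs
    tFar  = min(max(t1, t2));                % earliest exit
    hit = (tNear <= tFar) && (tFar >= 0);
    if hit
        fprintf('Ray enters at t = %.3f, exits at t = %.3f\n', tNear, tFar);
    end
    ```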

  13. Deterministic bead-in-droplet ejection utilizing an integrated plug-in bead dispenser for single bead–based applications

    PubMed Central

    Kim, Hojin; Choi, In Ho; Lee, Sanghyun; Won, Dong-Joon; Oh, Yong Suk; Kwon, Donghoon; Sung, Hyung Jin; Jeon, Sangmin; Kim, Joonwon

    2017-01-01

    This paper presents a deterministic bead-in-droplet ejection (BIDE) technique that regulates the precise distribution of microbeads in an ejected droplet. The deterministic BIDE was realized through the effective integration of a microfluidic single-particle handling technique with a liquid dispensing system. The integrated bead dispenser facilitates the transfer of the desired number of beads into a dispensing volume and the on-demand ejection of bead-encapsulated droplets. Single bead–encapsulated droplets were ejected every 3 s without any failure. Multiple-bead dispensing with deterministic control of the number of beads was demonstrated to emphasize the originality and quality of the proposed dispensing technique. The dispenser was mounted using a plug-socket type connection, and the dispensing process was completely automated using a programmed sequence without any microscopic observation. To demonstrate a potential application of the technique, a bead-based streptavidin–biotin binding assay in an evaporating droplet was conducted using ultralow numbers of beads. The results evidenced that the number of beads in the droplet crucially influences the reliability of the assay. Therefore, the proposed deterministic bead-in-droplet technology can be utilized to deliver desired beads onto a reaction site, particularly to reliably and efficiently enrich and detect target biomolecules. PMID:28393911

  14. Generating optimal control simulations of musculoskeletal movement using OpenSim and MATLAB.

    PubMed

    Lee, Leng-Feng; Umberger, Brian R

    2016-01-01

    Computer modeling, simulation and optimization are powerful tools that have seen increased use in biomechanics research. Dynamic optimizations can be categorized as either data-tracking or predictive problems. The data-tracking approach has been used extensively to address human movement problems of clinical relevance. The predictive approach also holds great promise, but has seen limited use in clinical applications. Enhanced software tools would facilitate the application of predictive musculoskeletal simulations to clinically-relevant research. The open-source software OpenSim provides tools for generating tracking simulations but not predictive simulations. However, OpenSim includes an extensive application programming interface that permits extending its capabilities with scripting languages such as MATLAB. In the work presented here, we combine the computational tools provided by MATLAB with the musculoskeletal modeling capabilities of OpenSim to create a framework for generating predictive simulations of musculoskeletal movement based on direct collocation optimal control techniques. In many cases, the direct collocation approach can be used to solve optimal control problems considerably faster than traditional shooting methods. Cyclical and discrete movement problems were solved using a simple 1 degree of freedom musculoskeletal model and a model of the human lower limb, respectively. The problems could be solved in reasonable amounts of time (several seconds to 1-2 hours) using the open-source IPOPT solver. The problems could also be solved using the fmincon solver that is included with MATLAB, but the computation times were excessively long for all but the smallest of problems. The performance advantage for IPOPT was derived primarily by exploiting sparsity in the constraints Jacobian. The framework presented here provides a powerful and flexible approach for generating optimal control simulations of musculoskeletal movement using OpenSim and MATLAB.
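
    The OpenSim-based framework itself is not reproduced here, but the direct collocation pattern it relies on can be shown on a toy problem: steer a 1-DOF system from x(0)=0 to x(1)=1 with dynamics x' = u while minimizing the integral of u^2, using trapezoidal defect constraints and fmincon (Optimization Toolbox):

    ```matlab
    % Toy direct-collocation sketch (not the OpenSim/MATLAB framework itself):
    % decision vector z = [x; u] at N nodes; dynamics become algebraic
    % "defect" equality constraints handed to fmincon.
    N = 21; h = 1/(N-1);                       % collocation nodes and spacing
    cost = @(z) h * sum(z(N+1:end).^2);        % crude quadrature of integral(u^2)
    z0 = zeros(2*N, 1);
    opts = optimoptions('fmincon', 'Display', 'off');
    z = fmincon(cost, z0, [], [], [], [], [], [], @(z) collocation(z, N, h), opts);
    plot(linspace(0, 1, N), z(1:N)); xlabel('t'); ylabel('x(t)');

    function [c, ceq] = collocation(z, N, h)
    % Trapezoidal defect constraints plus boundary conditions.
    x = z(1:N); u = z(N+1:end);
    defects = x(2:N) - x(1:N-1) - (h/2) * (u(2:N) + u(1:N-1));
    ceq = [defects; x(1); x(N) - 1];           % dynamics + x(0)=0, x(1)=1
    c = [];                                    % no inequality constraints
    end
    ```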

  15. Generating optimal control simulations of musculoskeletal movement using OpenSim and MATLAB

    PubMed Central

    Lee, Leng-Feng

    2016-01-01

    Computer modeling, simulation and optimization are powerful tools that have seen increased use in biomechanics research. Dynamic optimizations can be categorized as either data-tracking or predictive problems. The data-tracking approach has been used extensively to address human movement problems of clinical relevance. The predictive approach also holds great promise, but has seen limited use in clinical applications. Enhanced software tools would facilitate the application of predictive musculoskeletal simulations to clinically-relevant research. The open-source software OpenSim provides tools for generating tracking simulations but not predictive simulations. However, OpenSim includes an extensive application programming interface that permits extending its capabilities with scripting languages such as MATLAB. In the work presented here, we combine the computational tools provided by MATLAB with the musculoskeletal modeling capabilities of OpenSim to create a framework for generating predictive simulations of musculoskeletal movement based on direct collocation optimal control techniques. In many cases, the direct collocation approach can be used to solve optimal control problems considerably faster than traditional shooting methods. Cyclical and discrete movement problems were solved using a simple 1 degree of freedom musculoskeletal model and a model of the human lower limb, respectively. The problems could be solved in reasonable amounts of time (several seconds to 1–2 hours) using the open-source IPOPT solver. The problems could also be solved using the fmincon solver that is included with MATLAB, but the computation times were excessively long for all but the smallest of problems. The performance advantage for IPOPT was derived primarily by exploiting sparsity in the constraints Jacobian. The framework presented here provides a powerful and flexible approach for generating optimal control simulations of musculoskeletal movement using OpenSim and MATLAB

  16. Comparison of deterministic and Monte Carlo methods in shielding design.

    PubMed

    Oliveira, A D; Oliveira, C

    2005-01-01

    In shielding calculations, deterministic methods have some advantages and also some disadvantages relative to other kinds of codes, such as Monte Carlo. The main advantage is the short computer time needed to find solutions, while the disadvantages are related to the often-used build-up factor, which is extrapolated from high to low energies or to unknown geometrical conditions and can lead to significant errors in shielding results. The aim of this work is to investigate how well some deterministic methods calculate low-energy shielding, using attenuation coefficients and build-up factor corrections. The commercial software MicroShield 5.05 has been used as the deterministic code, while MCNP has been used as the Monte Carlo code. Point and cylindrical sources with a slab shield have been defined, allowing comparison between the capabilities of Monte Carlo and deterministic methods in day-to-day shielding calculations using sensitivity analysis of significant parameters, such as energy and geometrical conditions.
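
    The day-to-day calculation discussed above is essentially a point-kernel evaluation, exponential attenuation corrected by a build-up factor; the numbers in this sketch are illustrative assumptions, not data from the paper:

    ```matlab
    % Point-kernel sketch of the day-to-day calculation discussed above:
    % uncollided flux attenuated by a slab shield, corrected by a build-up
    % factor B. All numbers below are illustrative assumptions.
    S   = 3.7e10;        % source strength [photons/s] (monoenergetic point source)
    mu  = 0.06;          % linear attenuation coefficient of shield [1/cm]
    t   = 10;            % slab thickness [cm]
    r   = 100;           % source-to-detector distance [cm]
    B   = 2.5;           % build-up factor at mu*t mean free paths (tabulated)
    phi = B * S * exp(-mu * t) / (4 * pi * r^2);   % flux with build-up [1/cm^2/s]
    fprintf('Flux at detector: %.3e photons/cm^2/s\n', phi);
    ```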

  17. Risk-based versus deterministic explosives safety criteria

    SciTech Connect

    Wright, R.E.

    1996-12-01

    The Department of Defense Explosives Safety Board (DDESB) is actively considering ways to apply risk-based approaches in its decision-making processes. As such, an understanding of the impact of converting to risk-based criteria is required. The objectives of this project are to examine the benefits and drawbacks of risk-based criteria and to define the impact of converting from deterministic to risk-based criteria. Conclusions will be couched in terms that allow meaningful comparisons of deterministic and risk-based approaches. To this end, direct comparisons of the consequences and impacts of both deterministic and risk-based criteria at selected military installations are made. Deterministic criteria used in this report are those in DoD 6055.9-STD, 'DoD Ammunition and Explosives Safety Standard.' Risk-based criteria selected for comparison are those used by the government of Switzerland, 'Technical Requirements for the Storage of Ammunition (TLM 75).' The risk-based criteria used in Switzerland were selected because they have been successfully applied for over twenty-five years.

  18. A Deterministic Annealing Approach to Clustering AIRS Data

    NASA Technical Reports Server (NTRS)

    Guillaume, Alexandre; Braverman, Amy; Ruzmaikin, Alexander

    2012-01-01

    We will examine the validity of means and standard deviations as a basis for climate data products. We will explore the conditions under which these two simple statistics are inadequate summaries of the underlying empirical probability distributions by contrasting them with a nonparametric method, the Deterministic Annealing technique.

  19. Deterministic dense coding and faithful teleportation with multipartite graph states

    SciTech Connect

    Huang, C.-Y.; Yu, I-C.; Lin, F.-L.; Hsu, L.-Y.

    2009-05-15

    We propose schemes to perform deterministic dense coding and faithful teleportation with multipartite graph states. We also find the necessary and sufficient condition for a graph state to be viable for the proposed schemes: for the associated graph, the reduced adjacency matrix of the Tanner-type subgraph between senders and receivers should be invertible.
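
    The stated condition is easy to test numerically: over GF(2), a 0/1 matrix is invertible exactly when its integer determinant is odd (reliable for small matrices). The adjacency below is illustrative, not taken from the paper:

    ```matlab
    % Sketch of the stated condition (illustrative adjacency, not a graph
    % state from the paper): over GF(2), a 0/1 matrix is invertible exactly
    % when its integer determinant is odd, which suffices for small matrices.
    B = [1 1 0;
         0 1 1;
         0 0 1];                       % reduced adjacency (senders x receivers)
    detB = round(det(B));              % integer determinant (exact for small B)
    viable = mod(detB, 2) ~= 0;        % invertible over GF(2) <=> det is odd
    fprintf('Viable for the protocol: %d\n', viable);
    ```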

  20. Deterministic retrieval of complex Green's functions using hard X rays.

    PubMed

    Vine, D J; Paganin, D M; Pavlov, K M; Uesugi, K; Takeuchi, A; Suzuki, Y; Yagi, N; Kämpfe, T; Kley, E-B; Förster, E

    2009-01-30

    A massively parallel deterministic method is described for reconstructing shift-invariant complex Green's functions. As a first experimental implementation, we use a single phase contrast x-ray image to reconstruct the complex Green's function associated with Bragg reflection from a thick perfect crystal. The reconstruction is in excellent agreement with a classic prediction of dynamical diffraction theory.

  1. A Unit on Deterministic Chaos for Student Teachers

    ERIC Educational Resources Information Center

    Stavrou, D.; Assimopoulos, S.; Skordoulis, C.

    2013-01-01

    A unit aiming to introduce pre-service teachers of primary education to the limited predictability of deterministic chaotic systems is presented. The unit is based on a commercial chaotic pendulum system connected with a data acquisition interface. The capabilities and difficulties in understanding the notion of limited predictability of 18…

  2. Ground motion following selection of SRS design basis earthquake and associated deterministic approach. Final report: Revision 1

    SciTech Connect

    Not Available

    1991-03-01

    This report summarizes the results of a deterministic assessment of earthquake ground motions at the Savannah River Site (SRS). The purpose of this study is to assist the Environmental Sciences Section of the Savannah River Laboratory in reevaluating the design basis earthquake (DBE) ground motion at SRS using approaches defined in Appendix A to 10 CFR Part 100. This work is in support of the Seismic Engineering Section's Seismic Qualification Program for reactor restart.

  3. TRIAC II. A MatLab code for track measurements from SSNT detectors

    NASA Astrophysics Data System (ADS)

    Patiris, D. L.; Blekas, K.; Ioannides, K. G.

    2007-08-01

    A computer program named TRIAC II, written in MATLAB and running with a friendly GUI, has been developed for the recognition and parameter measurement of particle tracks from images of Solid State Nuclear Track Detectors. The program, using image analysis tools, counts the number of tracks and, depending on the current working mode, classifies them according to their radii (Mode I—circular tracks) or their axes (Mode II—elliptical tracks), their mean intensity value (brightness) and their orientation. Images of the detectors' surfaces are input to the code, which generates text files as output, including the number of counted tracks with the associated track parameters. Hough transform techniques are used for the estimation of the number of tracks and their parameters, providing results even in cases of overlapping tracks. Finally, it is possible for the user to obtain informative histograms as well as output files for each image and/or group of images. Program summary Title of program: TRIAC II Catalogue identifier:ADZC_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADZC_v1_0 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computer: Pentium III, 600 MHz Installations: MATLAB 7.0 Operating system under which the program has been tested: Windows XP Programming language used:MATLAB Memory required to execute with typical data:256 MB No. of bits in a word:32 No. of processors used:one Has the code been vectorized or parallelized?:no No. of lines in distributed program, including test data, etc.:25 964 No. of bytes in distributed program including test data, etc.: 4 354 510 Distribution format:tar.gz Additional comments: This program requires the MatLab Statistical toolbox and the Image Processing Toolbox to be installed. Nature of physical problem: Following the passage of a charged particle (protons and heavier) through a Solid State Nuclear Track Detector (SSNTD), a damage region is created, usually named a latent track
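
    TRIAC II's own Hough implementation is not shown here; for Mode I (circular tracks), MATLAB's built-in circular Hough transform can sketch the same counting-and-classification step (Image Processing Toolbox; the image file is hypothetical):

    ```matlab
    % Sketch of Mode I (circular tracks) using MATLAB's built-in circular
    % Hough transform, in place of TRIAC II's own implementation.
    % 'tracks.png' is a hypothetical detector image.
    A = imread('tracks.png');                       % grayscale track image (assumed)
    [centers, radii] = imfindcircles(A, [5 30], ... % search radius range in pixels
                                     'ObjectPolarity', 'dark');
    fprintf('Counted %d circular tracks\n', size(centers, 1));
    histogram(radii);                               % classify tracks by radius
    xlabel('track radius [pixels]'); ylabel('count');
    ```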

  4. Chirp Z-transform spectral zoom optimization with MATLAB.

    SciTech Connect

    Martin, Grant D.

    2005-11-01

    The MATLAB language has become a standard for rapid prototyping throughout all disciplines of engineering because the environment is easy to understand and use. Many of the basic functions included in MATLAB are those operations that are necessary to carry out larger algorithms such as the chirp z-transform spectral zoom. These functions include, but are not limited to, mathematical operators, logical operators, array indexing, and the Fast Fourier Transform (FFT). However, despite its ease of use, MATLAB's technical computing language is interpreted and thus is not always capable of the memory management and performance of a compiled language. There are, however, several optimizations that can be made within the chirp z-transform spectral zoom algorithm itself, and also to the MATLAB implementation, in order to take full advantage of the computing environment, lower processing time, and improve memory usage. To that end, this document's purpose is two-fold. The first part demonstrates how to perform a chirp z-transform spectral zoom as well as an optimization within the algorithm that improves performance and memory usage. The second demonstrates a minor MATLAB language usage technique that can reduce overhead memory costs and improve performance.
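
    MATLAB ships the chirp z-transform as czt in the Signal Processing Toolbox. A typical spectral zoom, evaluating m points on the unit circle between f1 and f2 instead of a full-band FFT, looks like this:

    ```matlab
    % Spectral zoom sketch using MATLAB's built-in chirp z-transform, czt
    % (Signal Processing Toolbox): evaluate m frequency points between
    % f1 and f2 only, instead of a full-band FFT.
    fs = 1000;                                    % sample rate [Hz]
    t  = (0:999) / fs;
    x  = cos(2*pi*99*t) + cos(2*pi*101*t);        % two closely spaced tones
    f1 = 90; f2 = 110; m = 512;                   % zoom band and resolution
    w  = exp(-1i * 2*pi * (f2 - f1) / (m * fs));  % ratio between zoom points
    a  = exp( 1i * 2*pi * f1 / fs);               % starting point on unit circle
    X  = czt(x, m, w, a);
    f  = f1 + (0:m-1) * (f2 - f1) / m;
    plot(f, abs(X)); xlabel('Hz'); ylabel('|X|');
    ```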

  5. Deterministically Polarized Fluorescence from Single Dye Molecules Aligned in Liquid Crystal Host

    SciTech Connect

    Lukishova, S.G.; Schmid, A.W.; Knox, R.; Freivald, P.; Boyd, R. W.; Stroud, Jr., C. R.; Marshall, K.L.

    2005-09-30

    We demonstrated, for the first time to our knowledge, deterministically polarized fluorescence from single dye molecules. Planar-aligned nematic liquid crystal hosts provide deterministic alignment of single dye molecules in a preferred direction.

  6. A flexible software tool for temporally-precise behavioral control in Matlab.

    PubMed

    Asaad, Wael F; Eskandar, Emad N

    2008-09-30

    Systems and cognitive neuroscience depend on carefully designed and precisely implemented behavioral tasks to elicit the neural phenomena of interest. To facilitate this process, we have developed a software system that allows for the straightforward coding and temporally-reliable execution of these tasks in Matlab. We find that, in most cases, millisecond accuracy is attainable, and those instances in which it is not are usually related to predictable, programmed events. In this report, we describe the design of our system, benchmark its performance in a real-world setting, and describe some key features.

  7. MatTAP: A MATLAB toolbox for the control and analysis of movement synchronisation experiments.

    PubMed

    Elliott, Mark T; Welchman, Andrew E; Wing, Alan M

    2009-02-15

    Investigating movement timing and synchronisation at the sub-second range relies on an experimental setup that has high temporal fidelity, is able to deliver output cues and can capture corresponding responses. Modern, multi-tasking operating systems make this increasingly challenging when using standard PC hardware and programming languages. This paper describes a new free suite of tools (available from http://www.snipurl.com/mattap) for use within the MATLAB programming environment, compatible with Microsoft Windows and a range of data acquisition hardware. The toolbox allows flexible generation of timing cues with high temporal accuracy, the capture and automatic storage of corresponding participant responses and an integrated analysis module for the rapid processing of results. A simple graphical user interface is used to navigate the toolbox and so can be operated easily by users not familiar with programming languages. However, it is also fully extensible and customisable, allowing adaptation for individual experiments and facilitating the addition of new modules in future releases. Here we discuss the relevance of the MatTAP (MATLAB Timing Analysis Package) toolbox to current timing experiments and compare its use to alternative methods. We validate the accuracy of the analysis module through comparison to manual observation methods and replicate a previous sensorimotor synchronisation experiment to demonstrate the versatility of the toolbox features demanded by such movement synchronisation paradigms.

  8. Piezoelectric Actuator Modeling Using MSC/NASTRAN and MATLAB

    NASA Technical Reports Server (NTRS)

    Reaves, Mercedes C.; Horta, Lucas G.

    2003-01-01

    This paper presents a procedure for modeling structures containing piezoelectric actuators using MSC/NASTRAN and MATLAB. The paper describes the utility and functionality of one set of validated modeling tools. The tools described herein use MSC/NASTRAN to model the structure with piezoelectric actuators and a thermally induced strain to model straining of the actuators due to an applied voltage field. MATLAB scripts are used to assemble the dynamic equations and to generate frequency response functions. The application of these tools is discussed using a cantilever aluminum beam with a surface mounted piezoelectric actuator as a sample problem. Software in the form of MSC/NASTRAN DMAP input commands, MATLAB scripts, and a step-by-step procedure to solve the example problem are provided. Analysis results are generated in terms of frequency response functions from deflection and strain data as a function of input voltage to the actuator.
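
    The last step described, generating frequency response functions from the assembled dynamic equations, reduces to inverting the dynamic stiffness matrix at each frequency; the 2-DOF matrices below are toy values standing in for MSC/NASTRAN output:

    ```matlab
    % Sketch of the FRF-generation step described above: once mass, damping,
    % and stiffness matrices are available (here toy 2-DOF values instead of
    % MSC/NASTRAN output), the frequency response matrix follows directly.
    M = diag([1, 2]);                        % mass matrix [kg]
    K = [300 -100; -100 100];                % stiffness matrix [N/m]
    C = 0.01 * K;                            % proportional damping (assumed)
    f = linspace(0.1, 5, 400);               % frequency axis [Hz]
    H11 = zeros(size(f));
    for k = 1:numel(f)
        w = 2*pi*f(k);
        H = (K - w^2 * M + 1i*w*C) \ eye(2); % dynamic stiffness inverse
        H11(k) = H(1, 1);                    % receptance FRF at DOF 1
    end
    semilogy(f, abs(H11)); xlabel('Hz'); ylabel('|H_{11}| [m/N]');
    ```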

  9. Introduction to multifractal detrended fluctuation analysis in matlab.

    PubMed

    Ihlen, Espen A F

    2012-01-01

    Fractal structures are found in biomedical time series from a wide range of physiological phenomena. The multifractal spectrum identifies the deviations in fractal structure within time periods with large and small fluctuations. The present tutorial is an introduction to multifractal detrended fluctuation analysis (MFDFA), which estimates the multifractal spectrum of biomedical time series. The tutorial presents MFDFA step-by-step in an interactive Matlab session. All Matlab tools needed are available in the Introduction to MFDFA folder at the website www.ntnu.edu/inm/geri/software. MFDFA is introduced in Matlab code boxes where the reader can employ pieces of, or the entire, MFDFA on example time series. After introducing MFDFA, the tutorial discusses the best practice of MFDFA in biomedical signal processing. The main aim of the tutorial is to give the reader a simple self-sustained guide to the implementation of MFDFA and interpretation of the resulting multifractal spectra.
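
    The tutorial's own code boxes are at the URL above; the essential steps (profile, windowed detrending, q-order averaging, log-log slopes) can be condensed into the following self-contained sketch:

    ```matlab
    % Compact MFDFA sketch following the tutorial's steps: build the profile,
    % detrend it in windows of each scale, and estimate the q-order
    % fluctuation functions Fq(s) whose log-log slopes give h(q).
    x = randn(1, 4096);                        % example series (white noise)
    Y = cumsum(x - mean(x));                   % step 1: profile
    scales = round(2.^(4:0.5:9));              % window sizes s
    q = [-3 0.01 3];                           % q-orders (q = 0 approximated)
    Fq = zeros(numel(q), numel(scales));
    for si = 1:numel(scales)
        s = scales(si);
        nSeg = floor(numel(Y) / s);
        F2 = zeros(1, nSeg);
        for v = 1:nSeg                         % step 2: local detrending
            seg = Y((v-1)*s + (1:s));
            p = polyfit(1:s, seg, 1);
            F2(v) = mean((seg - polyval(p, 1:s)).^2);
        end
        for qi = 1:numel(q)                    % step 3: q-order averaging
            Fq(qi, si) = mean(F2 .^ (q(qi)/2)) ^ (1/q(qi));
        end
    end
    h = zeros(size(q));                        % step 4: h(q) from log-log slopes
    for qi = 1:numel(q)
        c = polyfit(log(scales), log(Fq(qi, :)), 1);
        h(qi) = c(1);                          % ~0.5 for all q on white noise
    end
    ```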

  10. Introduction to Multifractal Detrended Fluctuation Analysis in Matlab

    PubMed Central

    Ihlen, Espen A. F.

    2012-01-01

    Fractal structures are found in biomedical time series from a wide range of physiological phenomena. The multifractal spectrum identifies the deviations in fractal structure within time periods with large and small fluctuations. The present tutorial is an introduction to multifractal detrended fluctuation analysis (MFDFA), which estimates the multifractal spectrum of biomedical time series. The tutorial presents MFDFA step-by-step in an interactive Matlab session. All Matlab tools needed are available in the Introduction to MFDFA folder at the website www.ntnu.edu/inm/geri/software. MFDFA is introduced in Matlab code boxes where the reader can employ pieces of, or the entire, MFDFA on example time series. After introducing MFDFA, the tutorial discusses the best practice of MFDFA in biomedical signal processing. The main aim of the tutorial is to give the reader a simple self-sustained guide to the implementation of MFDFA and interpretation of the resulting multifractal spectra. PMID:22675302

  11. DNSLab: A gateway to turbulent flow simulation in Matlab

    NASA Astrophysics Data System (ADS)

    Vuorinen, V.; Keskinen, K.

    2016-06-01

    Computational fluid dynamics (CFD) research is increasingly focused on computationally intensive, eddy-resolving simulation techniques for turbulent flows such as large-eddy simulation (LES) and direct numerical simulation (DNS). Here, we present a compact educational software package called DNSLab, tailored for learning the partial differential equations of turbulence from the perspective of DNS in the Matlab environment. Based on educational experiences and course feedback from tens of engineering post-graduate students and industrial engineers, DNSLab can offer a major gateway to turbulence simulation with minimal prerequisites. Matlab implementation of two common fractional step projection methods is considered: the 2d Fourier pseudo-spectral method, and the 3d finite difference method with 2nd order spatial accuracy. Both methods are based on vectorization in Matlab, and slow for-loops are thus avoided. DNSLab is tested on two basic problems which we have noted to be of high educational value: the 2d periodic array of decaying vortices, and 3d turbulent channel flow at Reτ = 180. To the best of our knowledge, the present study is possibly the first to investigate the efficiency of a 3d turbulent, wall-bounded flow in Matlab. The accuracy and efficiency of DNSLab is compared with a customized OpenFOAM solver called rk4projectionFoam. Based on our experiences and course feedback, the main contribution of DNSLab consists of the following features. (i) The very compact Matlab implementation of the present Navier-Stokes solvers provides a gateway to efficient learning of both the physics of turbulent flows and the simulation of turbulence. (ii) Only relatively minor prerequisites in fluid dynamics and numerical methods are required for using DNSLab. (iii) In 2d, interactive results for turbulent flow cases can be obtained. Even for 3d channel flow, the solver is fast enough for nearly interactive educational use. (iv) DNSLab is made openly available and thus contributing to
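
    The heart of the 2d fractional step (projection) method mentioned above is a periodic pressure Poisson solve, which the FFT handles in fully vectorized form; this minimal sketch verifies spectral accuracy against a known solution:

    ```matlab
    % Minimal vectorized sketch of the building block at the heart of the 2d
    % fractional step method mentioned above: solving the periodic pressure
    % Poisson equation lap(p) = f with the FFT (no for-loops, as in DNSLab).
    n = 128; L = 2*pi;
    xv = (0:n-1) * (L/n);
    [X, Y] = meshgrid(xv, xv);
    f = -2 * sin(X) .* sin(Y);               % RHS whose exact solution is sin(x)sin(y)
    k = [0:n/2-1, -n/2:-1] * (2*pi/L);       % Fourier wavenumbers
    [KX, KY] = meshgrid(k, k);
    K2 = KX.^2 + KY.^2; K2(1, 1) = 1;        % avoid 0/0 at the mean (k = 0) mode
    phat = -fft2(f) ./ K2;                   % invert the Laplacian spectrally
    phat(1, 1) = 0;                          % fix the arbitrary mean of p
    p = real(ifft2(phat));
    err = max(abs(p(:) - reshape(sin(X).*sin(Y), [], 1)));
    fprintf('Max error: %.2e\n', err);       % spectral accuracy (~1e-15)
    ```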

  12. Approaches to implementing deterministic models in a probabilistic framework

    SciTech Connect

    Talbott, D.V.

    1995-04-01

    The increasing use of results from probabilistic risk assessments in the decision-making process makes it ever more important to eliminate simplifications in probabilistic models that might lead to conservative results. One area in which conservative simplifications are often made is modeling the physical interactions that occur during the progression of an accident sequence. This paper demonstrates and compares different approaches for incorporating deterministic models of physical parameters into probabilistic models; parameter range binning, response curves, and integral deterministic models. An example that combines all three approaches in a probabilistic model for the handling of an energetic material (e.g., high explosive, rocket propellant,...) is then presented using a directed graph model.

  13. Deterministic algorithm with agglomerative heuristic for location problems

    NASA Astrophysics Data System (ADS)

    Kazakovtsev, L.; Stupina, A.

    2015-10-01

    The authors consider the clustering problem solved with the k-means method and the p-median problem with various distance metrics. The p-median problem, and the k-means problem as its special case, are the most popular models of location theory. They are implemented for solving clustering problems and many practically important logistic problems, such as optimal factory or warehouse location, oil or gas wells, optimal offshore drilling for oil, and steam generators in heavy oil fields. The authors propose a new deterministic heuristic algorithm based on ideas of the Information Bottleneck Clustering and genetic algorithms with a greedy heuristic. In this paper, results of running the new algorithm on various data sets are given in comparison with known deterministic and stochastic methods. The new algorithm is shown to be significantly faster than the Information Bottleneck Clustering method while having comparable precision.
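
    The authors' greedy/agglomerative heuristic is not reproduced here; the sketch below only shows the k-means baseline it is compared against, using MATLAB's kmeans (Statistics and Machine Learning Toolbox) with restarts:

    ```matlab
    % k-means baseline sketch (the paper's own heuristic is not shown here).
    % Requires the Statistics and Machine Learning Toolbox.
    rng(1);
    X = [randn(100, 2); randn(100, 2) + 5];           % two synthetic clusters
    k = 2;
    [idx, C, sumd] = kmeans(X, k, 'Replicates', 10);  % restarts reduce local minima
    fprintf('Total within-cluster distance: %.3f\n', sum(sumd));
    gscatter(X(:,1), X(:,2), idx); hold on; plot(C(:,1), C(:,2), 'kx');
    ```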

  14. Deterministic control of ferroelastic switching in multiferroic materials.

    PubMed

    Balke, N; Choudhury, S; Jesse, S; Huijben, M; Chu, Y H; Baddorf, A P; Chen, L Q; Ramesh, R; Kalinin, S V

    2009-12-01

    Multiferroic materials showing coupled electric, magnetic and elastic orderings provide a platform to explore complexity and new paradigms for memory and logic devices. Until now, the deterministic control of non-ferroelectric order parameters in multiferroics has been elusive. Here, we demonstrate deterministic ferroelastic switching in rhombohedral BiFeO(3) by domain nucleation with a scanning probe. We are able to select among final states that have the same electrostatic energy, but differ dramatically in elastic or magnetic order, by applying voltage to the probe while it is in lateral motion. We also demonstrate the controlled creation of a ferrotoroidal order parameter. The ability to control local elastic, magnetic and toroidal order parameters with an electric field will make it possible to probe local strain and magnetic ordering, and engineer various magnetoelectric, domain-wall-based and strain-coupled devices.

  15. Towards a quasi-deterministic single-photon source

    NASA Astrophysics Data System (ADS)

    Peters, N. A.; Arnold, K. J.; VanDevender, A. P.; Jeffrey, E. R.; Rangarajan, R.; Hosten, O.; Barreiro, J. T.; Altepeter, J. B.; Kwiat, P. G.

    2006-08-01

    A source of single photons allows secure quantum key distribution, in addition, to being a critical resource for linear optics quantum computing. We describe our progress on deterministically creating single photons from spontaneous parametric downconversion, an extension of the Pittman, Jacobs and Franson scheme [Phys. Rev. A 66, 042303 (2002)]. Their idea was to conditionally prepare single photons by measuring one member of a spontaneously emitted photon pair and storing the remaining conditionally prepared photon until a predetermined time, when it would be "deterministically" released from storage. Our approach attempts to improve upon this by recycling the pump pulse in order to decrease the possibility of multiple-pair generation, while maintaining a high probability of producing a single pair. Many of the challenges we discuss are central to other quantum information technologies, including the need for low-loss optical storage, switching and detection, and fast feed-forward control.

  16. Deterministic error correction for nonlocal spatial-polarization hyperentanglement.

    PubMed

    Li, Tao; Wang, Guan-Yu; Deng, Fu-Guo; Long, Gui-Lu

    2016-02-10

    Hyperentanglement is an effective quantum source for quantum communication networks due to its high capacity, low loss rate, and its ability to teleport a quantum particle completely. Here we present a deterministic error-correction scheme for nonlocal spatial-polarization hyperentangled photon pairs over collective-noise channels. In our scheme, the spatial-polarization hyperentanglement is first encoded into a spatial-defined time-bin entanglement with identical polarization before it is transmitted over collective-noise channels, which leads to the error rejection of the spatial entanglement during the transmission. The polarization noise affecting the polarization entanglement can be corrected with a proper one-step decoding procedure. The two parties in quantum communication can, in principle, obtain a nonlocal maximally entangled spatial-polarization hyperentanglement in a deterministic way, which makes our protocol more convenient than others in long-distance quantum communication.

  17. Deterministic remote two-qubit state preparation in dissipative environments

    NASA Astrophysics Data System (ADS)

    Li, Jin-Fang; Liu, Jin-Ming; Feng, Xun-Li; Oh, C. H.

    2016-05-01

    We propose a new scheme for efficient remote preparation of an arbitrary two-qubit state, introducing two auxiliary qubits and using two Einstein-Podolsky-Rosen (EPR) states as the quantum channel in a non-recursive way. At variance with all existing schemes, our scheme accomplishes deterministic remote state preparation (RSP) with only one sender and the simplest entangled resource (say, EPR pairs). We construct the corresponding quantum logic circuit using a unitary matrix decomposition procedure and analytically obtain the average fidelity of the deterministic RSP process for dissipative environments. Our studies show that, while the average fidelity gradually decreases to a stable value without any revival in the Markovian regime, it decreases to the same stable value with a dampened revival amplitude in the non-Markovian regime. We also find that the average fidelity's approximate maximal value can be preserved for a long time if the non-Markovian and the detuning conditions are satisfied simultaneously.

  18. Modelling Subsea Coaxial Cable as FIR Filter on MATLAB

    NASA Astrophysics Data System (ADS)

    Kanisin, D.; Nordin, M. S.; Hazrul, M. H.; Kumar, E. A.

    2011-05-01

    The paper presents the modelling of a subsea coaxial cable as an FIR filter in MATLAB. Subsea coaxial cables are commonly used in the telecommunications industry and in the oil and gas industry. Furthermore, such a cable is unlike a filter circuit, which is a "lumped network" in which individual components appear as discrete items. Therefore, a subsea coaxial network can be represented as a digital filter. Overall, the study has been conducted using MATLAB to model the subsea coaxial channel based on the primary and secondary parameters of the subsea coaxial cable.
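
    The modelling idea reduces to representing the cable's sampled impulse response as FIR taps; the taps below are illustrative, not measured cable data (freqz requires the Signal Processing Toolbox):

    ```matlab
    % Sketch of the modelling idea above: represent the cable's sampled
    % impulse response as FIR taps and filter a test signal through it.
    % The taps below are illustrative assumptions, not measured cable data.
    b  = [1.0, 0.45, 0.20, 0.08, 0.03];     % assumed impulse-response taps
    fs = 1e6;                                % sample rate [Hz]
    t  = (0:1023) / fs;
    x  = randn(size(t));                     % broadband test input
    y  = filter(b, 1, x);                    % cable modelled as an FIR filter
    freqz(b, 1, 512, fs);                    % channel magnitude/phase response
    ```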

  19. Nano transfer and nanoreplication using deterministically grown sacrificial nanotemplates

    DOEpatents

    Melechko, Anatoli V [Oak Ridge, TN; McKnight, Timothy E [Greenback, TN; Guillorn, Michael A [Ithaca, NY; Ilic, Bojan [Ithaca, NY; Merkulov, Vladimir I [Knoxville, TX; Doktycz, Mitchel J [Knoxville, TN; Lowndes, Douglas H [Knoxville, TN; Simpson, Michael L [Knoxville, TN

    2012-03-27

    Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. An apparatus, includes a substrate and a nanoconduit material coupled to a surface of the substrate. The substrate defines an aperture and the nanoconduit material defines a nanoconduit that is i) contiguous with the aperture and ii) aligned substantially non-parallel to a plane defined by the surface of the substrate.

  20. A deterministic algorithm for constrained enumeration of transmembrane protein folds.

    SciTech Connect

    Brown, William Michael; Young, Malin M.; Sale, Kenneth L.; Faulon, Jean-Loup Michel; Schoeniger, Joseph S.

    2004-07-01

    A deterministic algorithm for enumeration of transmembrane protein folds is presented. Using a set of sparse pairwise atomic distance constraints (such as those obtained from chemical cross-linking, FRET, or dipolar EPR experiments), the algorithm performs an exhaustive search of secondary structure element packing conformations distributed throughout the entire conformational space. The end result is a set of distinct protein conformations, which can be scored and refined as part of a process designed for computational elucidation of transmembrane protein structures.

  1. Uniform Deterministic Discrete Method for three dimensional systems

    NASA Astrophysics Data System (ADS)

    Li, Ben-Wen; Tao, Wen-Quan; Nie, Yu-Hong

    1997-06-01

    For radiative direct exchange areas in three-dimensional systems, the Uniform Deterministic Discrete Method (UDDM) was adopted. The spherical-surface dividing method for the sending area element and the regular icosahedron for the sending volume element can handle the direct exchange area computation for any kind of zone pair. Numerical examples of direct exchange areas in three-dimensional systems with nonhomogeneous attenuation coefficients indicated that the UDDM can give very high numerical accuracy.

  2. Probabilistic versus deterministic hazard assessment in liquefaction susceptible zones

    NASA Astrophysics Data System (ADS)

    Daminelli, Rosastella; Gerosa, Daniele; Marcellini, Alberto; Tento, Alberto

    2015-04-01

    Probabilistic seismic hazard assessment (PSHA), usually adopted in the framework of seismic code redaction, is based on a Poissonian description of temporal occurrence, a negative exponential distribution of magnitude, and an attenuation relationship with a log-normal distribution of PGA or response spectrum. The main positive aspect of this approach stems from the fact that it is presently a standard for the majority of countries, but there are weak points, in particular regarding the physical description of the earthquake phenomenon. Factors that could significantly influence the expected motion at the site, like site effects and source characteristics such as the duration of strong motion and directivity, are not taken into account by PSHA. Deterministic models can better evaluate the ground motion at a site from a physical point of view, but their prediction reliability depends on the degree of knowledge of the source, wave propagation and soil parameters. We compare these two approaches in selected sites affected by the May 2012 Emilia-Romagna and Lombardia earthquake, which caused widespread liquefaction phenomena, unusual for magnitudes less than 6. We focus on sites liquefiable because of their soil mechanical parameters and water table level. Our analysis shows that the choice between deterministic and probabilistic hazard analysis is strongly dependent on site conditions. The looser the soil and the higher the liquefaction potential, the more suitable is the deterministic approach. Source characteristics, in particular the duration of strong ground motion, have long been recognized as relevant to inducing liquefaction; unfortunately a quantitative prediction of these parameters appears very unlikely, dramatically reducing the possibility of their adoption in hazard assessment. Last but not least, economic factors are relevant in the choice of the approach. The case history of the 2012 Emilia-Romagna and Lombardia earthquake, with an officially estimated cost of 6 billions

  3. Pathological tremors: Deterministic chaos or nonlinear stochastic oscillators?

    NASA Astrophysics Data System (ADS)

    Timmer, Jens; Häußler, Siegfried; Lauk, Michael; Lücking, Carl

    2000-02-01

    Pathological tremors exhibit a nonlinear oscillation that is not strictly periodic. We investigate whether the deviation from periodicity is due to nonlinear deterministic chaotic dynamics or due to nonlinear stochastic dynamics. To do so, we apply methods from linear and nonlinear time series analysis to tremor time series. The results of the different methods suggest that the considered types of pathological tremors represent nonlinear stochastic second order processes.

  4. The deterministic SIS epidemic model in a Markovian random environment.

    PubMed

    Economou, Antonis; Lopez-Herrero, Maria Jesus

    2016-07-01

    We consider the classical deterministic susceptible-infective-susceptible epidemic model, where the infection and recovery rates depend on a background environmental process that is modeled by a continuous time Markov chain. This framework is able to capture several important characteristics that appear in the evolution of real epidemics in large populations, such as seasonality effects and environmental influences. We propose computational approaches for the determination of various distributions that quantify the evolution of the number of infectives in the population.
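
    A minimal sketch of such a model: a two-state environment switches at exponential times, and between switches the deterministic SIS equation dI/dt = beta*I*(1-I) - gamma*I is integrated; all rates are illustrative, not the paper's:

    ```matlab
    % Sketch of a deterministic SIS model with a two-state Markovian
    % environment (illustrative rates, not the paper's): the environment
    % switches the rates at exponential times; between switches the ODE
    % is integrated deterministically.
    rng(2);
    b = [0.9 0.3]; g = [0.2 0.2];          % infection/recovery rates per environment
    qSwitch = 0.5;                          % environment switching rate
    I = 0.01; env = 1; t = 0; T = 100;
    tt = t; II = I;
    while t < T
        dwell = -log(rand) / qSwitch;       % exponential time to the next switch
        [ts, Is] = ode45(@(s, y) b(env)*y*(1 - y) - g(env)*y, ...
                         [t, min(t + dwell, T)], I);
        tt = [tt; ts]; II = [II; Is];       % append this deterministic piece
        t = ts(end); I = Is(end);
        env = 3 - env;                      % toggle environment 1 <-> 2
    end
    plot(tt, II); xlabel('t'); ylabel('fraction infective I(t)');
    ```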

  5. Deterministic chaos control in neural networks on various topologies

    NASA Astrophysics Data System (ADS)

    Neto, A. J. F.; Lima, F. W. S.

    2017-01-01

    Using numerical simulations, we study the control of deterministic chaos in neural networks on various topologies like Voronoi-Delaunay, Barabási-Albert, Small-World networks and Erdös-Rényi random graphs by "pinning" the state of a "special" neuron. We show that the chaotic activity of the networks or graphs, when control is on, can become constant or periodic.
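
    A generic pinning sketch (illustrative parameters, not the paper's networks): coupled logistic maps on a random graph, with one "special" node held at the map's fixed point so that its influence spreads through the coupling:

    ```matlab
    % Generic pinning sketch (illustrative, not the paper's networks):
    % coupled logistic maps on a random graph, with one "special" node
    % pinned to the fixed point so its influence spreads through coupling.
    rng(3);
    n = 50; f = @(x) 4 * x .* (1 - x);      % chaotic logistic map
    A = full(double(sprand(n, n, 0.1) > 0));
    A = max(A, A'); A(1:n+1:end) = 0;       % random undirected graph, no self-loops
    deg = max(sum(A, 2), 1);                % degrees (guard against isolated nodes)
    epsC = 0.8; xstar = 0.75;               % coupling strength; f(0.75) = 0.75
    x = rand(n, 1);
    for t = 1:500
        x = (1 - epsC) * f(x) + epsC * (A * f(x)) ./ deg;  % diffusively coupled maps
        x(1) = xstar;                       % pin the "special" neuron's state
    end
    fprintf('Spread of final states: %.3g\n', std(x));  % small if chaos was tamed
    ```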

  6. Deterministic generation of remote entanglement with active quantum feedback

    NASA Astrophysics Data System (ADS)

    Martin, Leigh; Motzoi, Felix; Li, Hanhan; Sarovar, Mohan; Whaley, K. Birgitta

    2015-12-01

    We consider the task of deterministically entangling two remote qubits using joint measurement and feedback, but no directly entangling Hamiltonian. In order to formulate the most effective experimentally feasible protocol, we introduce the notion of average-sense locally optimal feedback protocols, which do not require real-time quantum state estimation, a difficult component of real-time quantum feedback control. We use this notion of optimality to construct two protocols that can deterministically create maximal entanglement: a semiclassical feedback protocol for low-efficiency measurements and a quantum feedback protocol for high-efficiency measurements. The latter reduces to direct feedback in the continuous-time limit, whose dynamics can be modeled by a Wiseman-Milburn feedback master equation, which yields an analytic solution in the limit of unit measurement efficiency. Our formalism can smoothly interpolate between continuous-time and discrete-time descriptions of feedback dynamics and we exploit this feature to derive a superior hybrid protocol for arbitrary nonunit measurement efficiency that switches between quantum and semiclassical protocols. Finally, we show using simulations incorporating experimental imperfections that deterministic entanglement of remote superconducting qubits may be achieved with current technology using the continuous-time feedback protocol alone.

  7. Probabilistic vs deterministic views in facing natural hazards

    NASA Astrophysics Data System (ADS)

    Arattano, Massimo; Coviello, Velio

    2015-04-01

    Natural hazards can be mitigated through active or passive measures. Among the latter countermeasures, Early Warning Systems (EWSs) are playing an increasing and significant role. In particular, a growing number of studies investigate the reliability of landslide EWSs, their comparability to alternative protection measures and their cost-effectiveness. EWSs, however, inevitably and intrinsically imply the concept of probability of occurrence and/or probability of error. Science has long since accepted and integrated the probabilistic nature of reality and its phenomena. The same cannot be said of other fields of knowledge, such as law or politics, with which scientists sometimes have to interact. These disciplines are in fact still linked to more deterministic views of life. The same is true of public opinion, which often requires, or even expects, a deterministic type of answer to its needs. So, as an example, it might be easy for people to feel completely safe because an EWS has been installed. It is also easy for an administrator or a politician to contribute to spreading this wrong feeling, together with the idea of having dealt with the problem and done something definitive to face it. Can geoethics play a role in creating a link between the probabilistic world of nature and science and society's tendency toward a more deterministic view of things? Answering this question could help scientists to feel more confident in planning and performing their research activities.

  8. Non-equilibrium Thermodynamics of Piecewise Deterministic Markov Processes

    NASA Astrophysics Data System (ADS)

    Faggionato, A.; Gabrielli, D.; Ribezzi Crivellari, M.

    2009-10-01

    We consider a class of stochastic dynamical systems, called piecewise deterministic Markov processes, with states (x, σ) ∈ Ω × Γ, Ω being a region in ℝ^d or the d-dimensional torus, Γ being a finite set. The continuous variable x follows a piecewise deterministic dynamics, the discrete variable σ evolves by a stochastic jump dynamics and the two resulting evolutions are fully-coupled. We study stationarity, reversibility and time-reversal symmetries of the process. Increasing the frequency of the σ-jumps, the system behaves asymptotically as deterministic and we investigate the structure of its fluctuations (i.e. deviations from the asymptotic behavior), recovering in a non-Markovian frame results obtained by Bertini et al. (Phys. Rev. Lett. 87(4):040601, 2001; J. Stat. Phys. 107(3-4):635-675, 2002; J. Stat. Mech. P07014, 2007; Preprint available online at http://www.arxiv.org/abs/0807.4457, 2008), in the context of Markovian stochastic interacting particle systems. Finally, we discuss a Gallavotti-Cohen-type symmetry relation with involution map different from time-reversal.

  9. How Does Quantum Uncertainty Emerge from Deterministic Bohmian Mechanics?

    NASA Astrophysics Data System (ADS)

    Solé, A.; Oriols, X.; Marian, D.; Zanghì, N.

    2016-10-01

    Bohmian mechanics is a theory that provides a consistent explanation of quantum phenomena in terms of point particles whose motion is guided by the wave function. In this theory, the state of a system of particles is defined by the actual positions of the particles and the wave function of the system; and the state of the system evolves deterministically. Thus, the Bohmian state can be compared with the state in classical mechanics, which is given by the positions and momenta of all the particles, and which also evolves deterministically. However, while in classical mechanics it is usually taken for granted and considered unproblematic that the state is, at least in principle, measurable, this is not the case in Bohmian mechanics. Due to the linearity of the quantum dynamical laws, one essential component of the Bohmian state, the wave function, is not directly measurable. Moreover, it turns out that the measurement of the other component of the state — the positions of the particles — must be mediated by the wave function; a fact that in turn implies that the positions of the particles, though measurable, are constrained by absolute uncertainty. This is the key to understanding how Bohmian mechanics, despite being deterministic, can account for all quantum predictions, including quantum randomness and uncertainty.

  10. Iterative acceleration methods for Monte Carlo and deterministic criticality calculations

    SciTech Connect

    Urbatsch, T.J.

    1995-11-01

    If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
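
    The fission matrix idea can be sketched compactly: if F(i,j) estimates the mean number of next-generation fission neutrons born in region i per fission neutron born in region j, the dominant eigenpair of F gives k_eff and the converged source shape. The toy tridiagonal matrix below is illustrative, not MCNP output:

    ```matlab
    % Sketch of the fission matrix idea described above (toy numbers, not
    % MCNP): the dominant eigenpair of an estimated fission matrix F gives
    % k_eff and the converged fission source shape used to accelerate
    % source convergence.
    n = 50;
    F = diag(0.9*ones(n, 1)) ...
      + diag(0.25*ones(n-1, 1), 1) + diag(0.25*ones(n-1, 1), -1);
    [S, keff] = eigs(F, 1);                % dominant eigenpair
    S = abs(S) / sum(abs(S));              % normalized fission source shape
    fprintf('k_eff estimate: %.5f\n', keff);
    plot(S); xlabel('region'); ylabel('source fraction');
    ```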

  11. Demographic noise can reverse the direction of deterministic selection

    PubMed Central

    Constable, George W. A.; Rogers, Tim; McKane, Alan J.; Tarnita, Corina E.

    2016-01-01

    Deterministic evolutionary theory robustly predicts that populations displaying altruistic behaviors will be driven to extinction by mutant cheats that absorb common benefits but do not themselves contribute. Here we show that when demographic stochasticity is accounted for, selection can in fact act in the reverse direction to that predicted deterministically, instead favoring cooperative behaviors that appreciably increase the carrying capacity of the population. Populations that exist in larger numbers experience a selective advantage by being more stochastically robust to invasions than smaller populations, and this advantage can persist even in the presence of reproductive costs. We investigate this general effect in the specific context of public goods production and find conditions for stochastic selection reversal leading to the success of public good producers. This insight, developed here analytically, is missed by the deterministic analysis as well as by standard game theoretic models that enforce a fixed population size. The effect is found to be amplified by space; in this scenario we find that selection reversal occurs within biologically reasonable parameter regimes for microbial populations. Beyond the public good problem, we formulate a general mathematical framework for models that may exhibit stochastic selection reversal. In this context, we describe a stochastic analog to r−K theory, by which small populations can evolve to higher densities in the absence of disturbance. PMID:27450085

  12. Deterministic form correction of extreme freeform optical surfaces

    NASA Astrophysics Data System (ADS)

    Lynch, Timothy P.; Myer, Brian W.; Medicus, Kate; DeGroote Nelson, Jessica

    2015-10-01

    The blistering pace of recent technological advances has led lens designers to rely increasingly on freeform optical components as crucial pieces of their designs. As these freeform components increase in geometrical complexity and continue to deviate further from traditional optical designs, the optical manufacturing community must rethink their fabrication processes in order to keep pace. To meet these new demands, Optimax has developed a variety of new deterministic freeform manufacturing processes. Combining traditional optical fabrication techniques with cutting edge technological innovations has yielded a multifaceted manufacturing approach that can successfully handle even the most extreme freeform optical surfaces. In particular, Optimax has placed emphasis on refining the deterministic form correction process. By developing many of these procedures in house, changes can be implemented quickly and efficiently in order to rapidly converge on an optimal manufacturing method. Advances in metrology techniques allow for rapid identification and quantification of irregularities in freeform surfaces, while deterministic correction algorithms precisely target features on the part and drastically reduce overall correction time. Together, these improvements have yielded significant advances in the realm of freeform manufacturing. With further refinements to these and other aspects of the freeform manufacturing process, the production of increasingly radical freeform optical components is quickly becoming a reality.

  13. Deterministic generation of remote entanglement with active quantum feedback

    SciTech Connect

    Martin, Leigh; Motzoi, Felix; Li, Hanhan; Sarovar, Mohan; Whaley, K. Birgitta

    2015-12-10

    We develop and study protocols for deterministic remote entanglement generation using quantum feedback, without relying on an entangling Hamiltonian. In order to formulate the most effective experimentally feasible protocol, we introduce the notion of average-sense locally optimal feedback protocols, which do not require real-time quantum state estimation, a difficult component of real-time quantum feedback control. We use this notion of optimality to construct two protocols that can deterministically create maximal entanglement: a semiclassical feedback protocol for low-efficiency measurements and a quantum feedback protocol for high-efficiency measurements. The latter reduces to direct feedback in the continuous-time limit, whose dynamics can be modeled by a Wiseman-Milburn feedback master equation, which yields an analytic solution in the limit of unit measurement efficiency. Our formalism can smoothly interpolate between continuous-time and discrete-time descriptions of feedback dynamics and we exploit this feature to derive a superior hybrid protocol for arbitrary nonunit measurement efficiency that switches between quantum and semiclassical protocols. Lastly, we show using simulations incorporating experimental imperfections that deterministic entanglement of remote superconducting qubits may be achieved with current technology using the continuous-time feedback protocol alone.

  14. Deterministic generation of remote entanglement with active quantum feedback

    DOE PAGES

    Martin, Leigh; Motzoi, Felix; Li, Hanhan; ...

    2015-12-10

    We develop and study protocols for deterministic remote entanglement generation using quantum feedback, without relying on an entangling Hamiltonian. In order to formulate the most effective experimentally feasible protocol, we introduce the notion of average-sense locally optimal feedback protocols, which do not require real-time quantum state estimation, a difficult component of real-time quantum feedback control. We use this notion of optimality to construct two protocols that can deterministically create maximal entanglement: a semiclassical feedback protocol for low-efficiency measurements and a quantum feedback protocol for high-efficiency measurements. The latter reduces to direct feedback in the continuous-time limit, whose dynamics can be modeled by a Wiseman-Milburn feedback master equation, which yields an analytic solution in the limit of unit measurement efficiency. Our formalism can smoothly interpolate between continuous-time and discrete-time descriptions of feedback dynamics and we exploit this feature to derive a superior hybrid protocol for arbitrary nonunit measurement efficiency that switches between quantum and semiclassical protocols. Lastly, we show using simulations incorporating experimental imperfections that deterministic entanglement of remote superconducting qubits may be achieved with current technology using the continuous-time feedback protocol alone.

  15. Potensoft: MATLAB-based software for potential field data processing, modeling and mapping

    NASA Astrophysics Data System (ADS)

    Özgü Arısoy, M.; Dikmen, Ünal

    2011-07-01

    An open-source software package including an easy-to-use graphical user interface (GUI) has been developed for processing, modeling and mapping of gravity and magnetic data. The program, called Potensoft, is a set of functions written in MATLAB. The most common application of Potensoft is spatial and frequency domain filtering of gravity and magnetic data. The GUI helps the user easily change all the required parameters. One of the major advantages of the program is that it displays the input and processed maps in a preview window, thereby allowing the user to track the results during the ongoing process. The source code can be modified depending on the user's goals. This paper discusses the main features of the program, and its capabilities are demonstrated by means of illustrative examples. The main objective is to introduce the developed package and encourage its use for academic, teaching and professional purposes.
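
    The kind of frequency-domain filtering Potensoft performs can be illustrated with a short, generic MATLAB sketch; the example below applies an upward-continuation filter, exp(-Δz·|k|), to a gridded anomaly. The grid, spacing and continuation height are placeholder values, and this is not Potensoft source code.

      % Generic upward continuation of a gridded potential-field anomaly via FFT.
      nx = 128; dx = 100;                        % grid size and spacing (m), assumed
      g  = peaks(nx);                            % stand-in anomaly grid
      kvec = 2*pi*[0:nx/2, -nx/2+1:-1]/(nx*dx);  % FFT wavenumber ordering
      [kx, ky] = meshgrid(kvec, kvec);
      k  = sqrt(kx.^2 + ky.^2);                  % radial wavenumber
      dz = 500;                                  % continuation height (m), assumed
      g_up = real(ifft2(fft2(g) .* exp(-dz*k))); % filtered (upward-continued) map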

  16. Sub-surface single ion detection in diamond: A path for deterministic color center creation

    NASA Astrophysics Data System (ADS)

    Abraham, John; Aguirre, Brandon; Pacheco, Jose; Camacho, Ryan; Bielejec, Edward; Sandia National Laboratories Team

    Deterministic single color center creation remains a critical milestone for the integrated use of diamond color centers. It depends on three components: focused ion beam implantation to control the location, yield improvement to control the activation, and single ion implantation to control the number of implanted ions. A surface electrode detector has been fabricated on diamond where the electron hole pairs generated during ion implantation are used as the detection signal. Results will be presented demonstrating single ion detection. The detection efficiency of the device will be described as a function of implant energy and device geometry. It is anticipated that the controlled introduction of single dopant atoms in diamond will provide a basis for deterministic single localized color centers. This work was performed, in part, at the Center for Integrated Nanotechnologies, an Office of Science User Facility operated for the U.S. Department of Energy Office of Science. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.

  17. GUINEVERE experiment: Kinetic analysis of some reactivity measurement methods by deterministic and Monte Carlo codes

    SciTech Connect

    Bianchini, G.; Burgio, N.; Carta, M.; Peluso, V.; Fabrizio, V.; Ricci, L.

    2012-07-01

    The GUINEVERE experiment (Generation of Uninterrupted Intense Neutrons at the lead Venus Reactor) is an experimental program in support of the ADS technology presently carried out at SCK-CEN in Mol (Belgium). In the experiment a modified lay-out of the original thermal VENUS critical facility is coupled to an accelerator, built by the French CNRS in Grenoble, working in both continuous and pulsed mode and delivering 14 MeV neutrons by bombardment of deuterons on a tritium target. The modified lay-out of the facility consists of a fast subcritical core made of 30% U-235 enriched metallic Uranium in a lead matrix. Several off-line and on-line reactivity measurement techniques will be investigated during the experimental campaign. This report is focused on the simulation by deterministic (ERANOS French code) and Monte Carlo (MCNPX US code) calculations of three reactivity measurement techniques, Slope (α-fitting), Area-ratio and Source-jerk, applied to a GUINEVERE subcritical configuration (namely SC1). The inferred reactivity, in dollar units, by the Area-ratio method shows an overall agreement between the two deterministic and Monte Carlo computational approaches, whereas the MCNPX Source-jerk results are affected by large uncertainties and allow only partial conclusions about the comparison. Finally, no particular spatial dependence of the results is observed in the case of the GUINEVERE SC1 subcritical configuration. (authors)
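
    The Area-ratio (Sjöstrand) method mentioned above infers the reactivity in dollars from the ratio of the prompt and delayed areas of the detector response after a neutron pulse. A minimal MATLAB sketch on synthetic data follows; all parameter values are invented for illustration, not GUINEVERE measurements.

      % Area-ratio (Sjostrand) method on a synthetic pulsed-neutron response.
      t     = linspace(0, 0.02, 2000);      % time after the pulse, s
      alpha = 500;                          % prompt decay constant, 1/s (assumed)
      A0 = 1e5;  C = 2e3;                   % prompt amplitude, delayed level (assumed)
      n  = A0*exp(-alpha*t) + C;            % synthetic detector response
      A_prompt  = trapz(t, n - C);          % area under the prompt component
      A_delayed = C*(t(end) - t(1));        % area under the delayed component
      rho = -A_prompt/A_delayed;            % reactivity in dollar units
      fprintf('reactivity = %.2f $\n', rho);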

  18. Deterministic global optimization algorithm based on outer approximation for the parameter estimation of nonlinear dynamic biological systems

    PubMed Central

    2012-01-01

    Background The estimation of parameter values for mathematical models of biological systems is an optimization problem that is particularly challenging due to the nonlinearities involved. One major difficulty is the existence of multiple minima in which standard optimization methods may fall during the search. Deterministic global optimization methods overcome this limitation, ensuring convergence to the global optimum within a desired tolerance. Global optimization techniques are usually classified into stochastic and deterministic. The former typically lead to lower CPU times but offer no guarantee of convergence to the global minimum in a finite number of iterations. In contrast, deterministic methods provide solutions of a given quality (i.e., optimality gap), but tend to lead to large computational burdens. Results This work presents a deterministic outer approximation-based algorithm for the global optimization of dynamic problems arising in the parameter estimation of models of biological systems. Our approach, which offers a theoretical guarantee of convergence to global minimum, is based on reformulating the set of ordinary differential equations into an equivalent set of algebraic equations through the use of orthogonal collocation methods, giving rise to a nonconvex nonlinear programming (NLP) problem. This nonconvex NLP is decomposed into two hierarchical levels: a master mixed-integer linear programming problem (MILP) that provides a rigorous lower bound on the optimal solution, and a reduced-space slave NLP that yields an upper bound. The algorithm iterates between these two levels until a termination criterion is satisfied. Conclusion The capabilities of our approach were tested in two benchmark problems, in which the performance of our algorithm was compared with that of the commercial global optimization package BARON. The proposed strategy produced near optimal solutions (i.e., within a desired tolerance) in a fraction of the CPU time required by
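
    The master-slave iteration described above can be sketched on a toy convex problem. The MATLAB fragment below alternates between a slave NLP (fmincon, with the integer variable fixed) that tightens the upper bound and a master MILP (intlinprog) built from gradient linearization cuts that raises the lower bound. The toy objective and bounds are invented for illustration; this is a generic outer-approximation sketch, not the authors' algorithm or BARON.

      % Toy outer approximation: min (x-1.3)^2 + (y-2.6)^2, x in [0,3], y in {0..5}.
      f     = @(x,y) (x-1.3).^2 + (y-2.6).^2;
      gradf = @(x,y) [2*(x-1.3); 2*(y-2.6)];
      A = []; b = [];                       % linearization cuts: A*[x;y;eta] <= b
      UB = inf; yk = 0;
      opts = optimoptions('fmincon','Display','none');
      for it = 1:20
          xk = fmincon(@(x) f(x,yk), 1, [],[],[],[], 0, 3, [], opts);  % slave NLP
          UB = min(UB, f(xk,yk));                                      % upper bound
          g  = gradf(xk,yk);                                           % new cut
          A  = [A; g(1), g(2), -1];
          b  = [b; g.'*[xk;yk] - f(xk,yk)];
          z  = intlinprog([0;0;1], 2, A, b, [],[], [0;0;-1e6], [3;5;1e6]); % master
          LB = z(3);  yk = round(z(2));
          if UB - LB < 1e-6, break; end
      end
      fprintf('optimum %.4f at y = %d\n', UB, yk);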

  19. Comparison of probabilistic and deterministic fiber tracking of cranial nerves.

    PubMed

    Zolal, Amir; Sobottka, Stephan B; Podlesek, Dino; Linn, Jennifer; Rieger, Bernhard; Juratli, Tareq A; Schackert, Gabriele; Kitzler, Hagen H

    2016-12-16

    OBJECTIVE The depiction of cranial nerves (CNs) using diffusion tensor imaging (DTI) is of great interest in skull base tumor surgery and DTI used with deterministic tracking methods has been reported previously. However, there are still no good methods usable for the elimination of noise from the resulting depictions. The authors have hypothesized that probabilistic tracking could lead to more accurate results, because it more efficiently extracts information from the underlying data. Moreover, the authors have adapted a previously described technique for noise elimination using gradual threshold increases to probabilistic tracking. To evaluate the utility of this new approach, a comparison is provided with this work between the gradual threshold increase method in probabilistic and deterministic tracking of CNs. METHODS Both tracking methods were used to depict CNs II, III, V, and the VII+VIII bundle. Depiction of 240 CNs was attempted with each of the above methods in 30 healthy subjects, which were obtained from 2 public databases: the Kirby repository (KR) and Human Connectome Project (HCP). Elimination of erroneous fibers was attempted by gradually increasing the respective thresholds (fractional anisotropy [FA] and probabilistic index of connectivity [PICo]). The results were compared with predefined ground truth images based on corresponding anatomical scans. Two label overlap measures (false-positive error and Dice similarity coefficient) were used to evaluate the success of both methods in depicting the CN. Moreover, the differences between these parameters obtained from the KR and HCP (with higher angular resolution) databases were evaluated. Additionally, visualization of 10 CNs in 5 clinical cases was attempted with both methods and evaluated by comparing the depictions with intraoperative findings. RESULTS Maximum Dice similarity coefficients were significantly higher with probabilistic tracking (p < 0.001; Wilcoxon signed-rank test). The false
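
    The two label-overlap measures used in the study are simple to compute on binary masks; a generic MATLAB sketch (with random stand-in volumes, not the paper's data) is:

      % Dice similarity coefficient and false-positive error for binary masks.
      gt  = rand(64,64,32) > 0.7;          % ground-truth mask (stand-in data)
      trk = rand(64,64,32) > 0.7;          % tracked-fiber mask (stand-in data)
      dice = 2*nnz(gt & trk) / (nnz(gt) + nnz(trk));
      fpe  = nnz(trk & ~gt) / nnz(trk);    % tracked voxels lying outside the truth
      fprintf('Dice = %.3f, false-positive error = %.3f\n', dice, fpe);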

  20. Enhancing Teaching using MATLAB Add-Ins for Excel

    ERIC Educational Resources Information Center

    Hamilton, Paul V.

    2004-01-01

    In this paper I will illustrate how to extend the capabilities of Microsoft Excel spreadsheets with add-ins created by MATLAB. Excel provides a broad array of fundamental tools but often comes up short when more sophisticated scenarios are involved. To overcome this shortcoming of Excel while retaining its ease of use, I will describe how…

  1. MATLAB: Another Way To Teach the Computer in the Classroom.

    ERIC Educational Resources Information Center

    Marriott, Shaun

    2002-01-01

    Describes a pilot project for MATLAB work in both information communication technology (ICT) and mathematics. The ICT work is on flowcharts and algorithms and discusses ways of communicating with computers. Mathematics lessons involve early algebraic ideas of variables representing numbers. Presents an activity involving number sequences. (KHR)

  2. Development and Validation of Reentry Simulation Using MATLAB

    DTIC Science & Technology

    2006-03-01

    Only reference-list fragments survive in this record; they cite the use of related simulations in the planning for the Mars Airplane (Murray, 2001) and in the aerocapture simulation for the Titan Explorer Mission to the Saturnian system (Way, David W., et al., Aerocapture Simulation and Performance for the Titan Explorer Mission, AIAA 2003-4951).

  3. Autonomous robot vision software design using Matlab toolboxes

    NASA Astrophysics Data System (ADS)

    Tedder, Maurice; Chung, Chan-Jin

    2004-10-01

    The purpose of this paper is to introduce a cost-effective way to design robot vision and control software using Matlab for an autonomous robot designed to compete in the 2004 Intelligent Ground Vehicle Competition (IGVC). The goal of the autonomous challenge event is for the robot to autonomously navigate an outdoor obstacle course bounded by solid and dashed lines on the ground. Visual input data is provided by a DV camcorder at 160 x 120 pixel resolution. The design of this system involved writing an image-processing algorithm using hue, saturation, and brightness (HSB) color filtering and Matlab image processing functions to extract the centroid, area, and orientation of the connected regions from the scene. These feature vectors are then mapped to linguistic variables that describe the objects in the world environment model. The linguistic variables act as inputs to a fuzzy logic controller designed using the Matlab fuzzy logic toolbox, which provides the knowledge and intelligence component necessary to achieve the desired goal. Java provides the central interface to the robot motion control and image acquisition components. Field test results indicate that the Matlab based solution allows for rapid software design, development and modification of our robot system.
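
    The pipeline described above (HSB filtering followed by extraction of centroid, area, and orientation of connected regions) maps naturally onto standard MATLAB Image Processing Toolbox calls. The thresholds and file name below are assumptions for illustration, not the authors' values.

      % HSB color filtering and region-feature extraction (illustrative sketch).
      rgb  = imread('course_frame.png');   % hypothetical 160 x 120 camera frame
      hsb  = rgb2hsv(rgb);                 % hue, saturation, brightness planes
      mask = hsb(:,:,1) < 0.15 & hsb(:,:,2) > 0.3 & hsb(:,:,3) > 0.5;  % assumed band
      mask = bwareaopen(mask, 50);         % discard small noise regions
      stats = regionprops(mask, 'Centroid', 'Area', 'Orientation');
      for k = 1:numel(stats)               % feature vectors for the world model
          fprintf('region %d: area %.0f, centroid (%.1f, %.1f), angle %.1f deg\n', ...
              k, stats(k).Area, stats(k).Centroid, stats(k).Orientation);
      end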

  4. Equilibrium-Staged Separations Using Matlab and Mathematica

    ERIC Educational Resources Information Center

    Binous, Housam

    2008-01-01

    We show a new approach, based on the utilization of Matlab and Mathematica, for solving liquid-liquid extraction and binary distillation problems. In addition, the author shares his experience using these two software packages to teach equilibrium-staged separations at the National Institute of Applied Sciences and Technology. (Contains 7 figures.)

  5. MATLAB tensor classes for fast algorithm prototyping : source code.

    SciTech Connect

    Bader, Brett William; Kolda, Tamara Gibson

    2004-10-01

    We present the source code for three MATLAB classes for manipulating tensors in order to allow fast algorithm prototyping. A tensor is a multidimensional or N-way array. This is a supplementary report; details on using this code are provided separately in SAND-XXXX.

  6. Realization of Fourier and Fresnel computer-generated holograms based on MATLAB

    NASA Astrophysics Data System (ADS)

    Lin, GuoQiang; Ren, XueChang

    2016-10-01

    A computer-generated hologram (CGH) can encode a picture. The image, which plays the role of the original object in traditional optics, is divided into two parts: one portion is encoded into a Fourier CGH, while the remainder is coded into a Fresnel CGH. During information transmission, the possibility of the details being stolen is therefore greatly reduced. Once the two holograms reach the receiving end, the original image is obtained by reconstructing both of them. This article presents three main contributions. First, it provides MATLAB source programs for the recording and the reconstruction (the two halves of the holographic technique) of both the Fresnel CGH and the Fourier CGH; MATLAB (Matrix Laboratory) is a commercial mathematical software package produced by a United States company. Second, it isolates the original image from the conjugate image in the reconstruction of the Fourier CGH by using an all-zero matrix. Even when the original and conjugate images are separated, their coexistence still prevents recovery of the original message, so a window function is applied to filter out one of them and preserve the image of interest. Finally, for the coding of the Fourier and Fresnel CGHs, the article describes several functions that decrease the noise of the image encoded into the program; these functions are applicable to both the Fourier and the Fresnel CGH.
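
    A minimal MATLAB simulation of the off-axis Fourier CGH workflow (recording, reconstruction, and windowing out the conjugate image) might look as follows. It is a sketch in the spirit of the paper, not its source program; the test object, carrier frequency and window placement are assumptions.

      % Off-axis Fourier CGH: recording, reconstruction, and windowing.
      N   = 256;
      obj = zeros(N);  obj(113:144, 113:144) = 1;    % simple test object
      obj = obj .* exp(1i*2*pi*rand(N));             % random diffuser phase
      OF  = fftshift(fft2(obj));                     % object spectrum
      [x, ~] = meshgrid(0:N-1);
      R   = exp(1i*2*pi*0.375*x);                    % tilted plane reference wave
      H   = abs(OF + R).^2;                          % real, nonnegative hologram
      rec = fftshift(ifft2(H));                      % twin images plus dc term
      win = zeros(N);  win(:, 1:N/4) = 1;            % window one diffraction order
      img = abs(rec) .* win;                         % recovered image region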

  7. A deterministic solution of the first order linear Boltzmann transport equation in the presence of external magnetic fields

    SciTech Connect

    St Aubin, J. Keyvanloo, A.; Fallone, B. G.; Vassiliev, O.

    2015-02-15

    Purpose: Accurate radiotherapy dose calculation algorithms are essential to any successful radiotherapy program, considering the high level of dose conformity and modulation in many of today’s treatment plans. As technology continues to progress, such as is the case with novel MRI-guided radiotherapy systems, the necessity for dose calculation algorithms to accurately predict delivered dose in increasingly challenging scenarios is vital. To this end, a novel deterministic solution has been developed to the first order linear Boltzmann transport equation which accurately calculates x-ray based radiotherapy doses in the presence of magnetic fields. Methods: The deterministic formalism discussed here with the inclusion of magnetic fields is outlined mathematically using a discrete ordinates angular discretization in an attempt to leverage existing deterministic codes. It is compared against the EGSnrc Monte Carlo code, utilizing the emf-macros addition which calculates the effects of electromagnetic fields. This comparison is performed in an inhomogeneous phantom that was designed to present a challenging calculation for deterministic calculations in 0, 0.6, and 3 T magnetic fields oriented parallel and perpendicular to the radiation beam. The accuracy of the formalism discussed here against Monte Carlo was evaluated with a gamma comparison using a standard 2%/2 mm and a more stringent 1%/1 mm criterion for a standard reference 10 × 10 cm² field as well as a smaller 2 × 2 cm² field. Results: Greater than 99.8% (94.8%) of all points analyzed passed a 2%/2 mm (1%/1 mm) gamma criterion for all magnetic field strengths and orientations investigated. All dosimetric changes resulting from the inclusion of magnetic fields were accurately calculated using the deterministic formalism. However, despite the algorithm’s high degree of accuracy, it is noticed that this formalism was not unconditionally stable using a discrete ordinate angular discretization
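
    The gamma comparison used to assess agreement combines a dose-difference and a distance-to-agreement criterion. A minimal one-dimensional MATLAB sketch of the metric (with stand-in profiles, not the paper's data) is:

      % Minimal 1D gamma evaluation with a 2%/2 mm criterion.
      x    = 0:0.5:100;                     % positions, mm
      Dref = exp(-((x-50)/15).^2);          % stand-in reference dose profile
      Dev  = exp(-((x-50.8)/15).^2);        % stand-in evaluated dose profile
      dta  = 2;  dd = 0.02*max(Dref);       % 2 mm DTA, 2% dose criteria
      gam  = zeros(size(x));
      for i = 1:numel(x)                    % search over the evaluated profile
          g2 = ((x - x(i))/dta).^2 + ((Dev - Dref(i))/dd).^2;
          gam(i) = sqrt(min(g2));
      end
      fprintf('gamma pass rate: %.1f%%\n', 100*mean(gam <= 1));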

  8. EEGVIS: A MATLAB Toolbox for Browsing, Exploring, and Viewing Large Datasets.

    PubMed

    Robbins, Kay A

    2012-01-01

    Recent advances in data monitoring and sensor technology have accelerated the acquisition of very large data sets. Streaming data sets from instrumentation such as multi-channel EEG recording usually must undergo substantial pre-processing and artifact removal. Even when using automated procedures, most scientists engage in laborious manual examination and processing to assure high quality data and to identify interesting or problematic data segments. Researchers also do not have a convenient method of visually assessing the effects of applying any stage in a processing pipeline. EEGVIS is a MATLAB toolbox that allows users to quickly explore multi-channel EEG and other large array-based data sets using multi-scale drill-down techniques. Customizable summary views reveal potentially interesting sections of data, which users can explore further by clicking to examine using detailed viewing components. The viewer and a companion browser are built on our MoBBED framework, which has a library of modular viewing components that can be mixed and matched to best reveal structure. Users can easily create new viewers for their specific data without any programming during the exploration process. These viewers automatically support pan, zoom, resizing of individual components, and cursor exploration. The toolbox can be used directly in MATLAB at any stage in a processing pipeline, as a plug-in for EEGLAB, or as a standalone precompiled application without MATLAB running. EEGVIS and its supporting packages are freely available under the GNU general public license at http://visual.cs.utsa.edu/eegvis.

  9. Interfacing MATLAB and Python Optimizers to Black-Box Environmental Simulation Models

    NASA Astrophysics Data System (ADS)

    Matott, L. S.; Leung, K.; Tolson, B.

    2009-12-01

    A common approach for utilizing environmental models in a management or policy-analysis context is to incorporate them into a simulation-optimization framework, where an underlying process-based environmental model is linked with an optimization search algorithm. The optimization search algorithm iteratively adjusts various model inputs (i.e. parameters or design variables) in order to minimize an application-specific objective function computed on the basis of model outputs (i.e. response variables). Numerous optimization algorithms have been applied to the simulation-optimization of environmental systems and this research investigated the use of optimization libraries and toolboxes that are readily available in MATLAB and Python - two popular high-level programming languages. Inspired by model-independent calibration codes (e.g. PEST and UCODE), a small piece of interface software (known as PIGEON) was developed. PIGEON allows users to interface Python and MATLAB optimizers with arbitrary black-box environmental models without writing any additional interface code. An initial set of benchmark tests (involving more than 20 MATLAB and Python optimization algorithms) was performed to validate the interface software; the results highlight the need to carefully consider such issues as numerical precision in output files and enforcement (or not) of parameter limits. Additional benchmark testing considered the problem of fitting isotherm expressions to laboratory data, with an emphasis on dual-mode expressions combining non-linear isotherms with a linear partitioning component. With respect to the selected isotherm fitting problems, derivative-free search algorithms significantly outperformed gradient-based algorithms. Attempts to improve gradient-based performance, via parameter tuning and also via several alternative multi-start approaches, were largely unsuccessful.
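
    The model-independent interface pattern that PIGEON automates can be written by hand in a few lines of MATLAB: write the candidate parameters to a file, invoke the black-box executable, read its output, and return an objective value. All file and executable names below are hypothetical placeholders, and this sketch is not PIGEON itself.

      % Driver: derivative-free search over the parameters of a black-box model.
      p_best = fminsearch(@blackbox_objective, [1.0; 0.5]);

      function cost = blackbox_objective(p)
          fid = fopen('params.in', 'w');             % hypothetical input file
          fprintf(fid, '%.10g\n', p);
          fclose(fid);
          system('model.exe params.in output.out');  % hypothetical executable
          y    = load('output.out');                 % simulated responses
          obs  = load('observed.dat');               % hypothetical observations
          cost = sum((y - obs).^2);                  % sum-of-squares objective
      end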

  10. Matching solute breakthrough with deterministic and stochastic aquifer models.

    PubMed

    Lemke, Lawrence D; Barrack, William A; Abriola, Linda M; Goovaerts, Pierre

    2004-01-01

    Two different deterministic and two alternative stochastic (i.e., geostatistical) approaches to modeling the distribution of hydraulic conductivity (K) in a nonuniform (σ²lnK = 0.29) glacial sand aquifer were used to explore the influence of conceptual model selection on simulations of three-dimensional tracer movement. The deterministic K models employed included a homogeneous effective K and a perfectly stratified 14-layer model. Stochastic K models were constructed using sequential Gaussian simulation and sequential indicator simulation conditioned to available K values estimated from measured grain size distributions. Standard simulation software packages MODFLOW, MT3DMS, and MODPATH were used to model three-dimensional ground water flow and transport in a field tracer test, where a pulse of bromide was injected through an array of three fully screened wells and extracted through a single fully screened well approximately 8 m away. Agreement between observed and simulated transport behavior was assessed through direct comparison of breakthrough curves (BTCs) and selected breakthrough metrics at the extraction well and at 26 individual multilevel sample ports distributed irregularly between the injection and extraction wells. Results indicate that conceptual models incorporating formation variability are better able to capture observed breakthrough behavior. Root mean square (RMS) error of the deterministic models bracketed the ensemble mean RMS error of stochastic models for simulated concentration vs. time series, but not for individual BTC characteristic metrics. The spatial variability models evaluated here may be better suited to simulating breakthrough behavior measured in wells screened over large intervals than at arbitrarily distributed observation points within a nonuniform aquifer domain.

  11. Optical image encryption technique based on deterministic phase masks

    NASA Astrophysics Data System (ADS)

    Zamrani, Wiam; Ahouzi, Esmail; Lizana, Angel; Campos, Juan; Yzuel, María J.

    2016-10-01

    The double-random phase encoding (DRPE) scheme, which is based on a 4f optical correlator system, is considered as a reference for the optical encryption field. We propose a modification of the classical DRPE scheme based on the use of a class of structured phase masks, the deterministic phase masks. In particular, we propose to conduct the encryption process by using two deterministic phase masks, which are built from linear combinations of several subkeys. For the decryption step, the input image is retrieved by using the complex conjugate of the deterministic phase masks, which were set in the encryption process. This concept of structured masks gives rise to encryption-decryption keys which are smaller and more compact than those required in the classical DRPE. In addition, we show that our method significantly improves the tolerance of the DRPE method to shifts of the decrypting phase mask; when no shift is applied, it provides similar performance to the DRPE scheme in terms of encryption-decryption results. This enhanced tolerance to the shift, which is proven by providing numerical simulation results for grayscale and binary images, may relax the rigidity of an encryption-decryption experimental implementation setup. To evaluate the effectiveness of the described method, the mean-square-error and the peak signal-to-noise ratio between the input images and the recovered images are calculated. Different studies based on simulated data are also provided to highlight the suitability and robustness of the method when applied to the image encryption-decryption processes.
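
    The underlying DRPE encryption and decryption steps are compact in MATLAB. In the sketch below the two deterministic masks are built from a linear combination of sinusoidal subkeys, which is our illustrative assumption rather than the authors' construction.

      % DRPE with deterministic phase masks (illustrative subkey construction).
      N = 256;  I = im2double(imresize(imread('cameraman.tif'), [N N]));
      [u, v] = meshgrid(0:N-1);
      phi1 = 2*pi*mod(0.3*sin(2*pi*3*u/N) + 0.7*cos(2*pi*5*v/N), 1);     % mask 1
      phi2 = 2*pi*mod(0.5*sin(2*pi*7*(u+v)/N) + 0.5*cos(2*pi*2*v/N), 1); % mask 2
      C = ifft2(fft2(I .* exp(1i*phi1)) .* exp(1i*phi2));   % encryption
      D = abs(ifft2(fft2(C) .* exp(-1i*phi2)));             % decryption
      mse = mean((I(:) - D(:)).^2);                         % fidelity check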

  12. Deterministic side-branching during thermal dendritic growth

    NASA Astrophysics Data System (ADS)

    Mullis, Andrew M.

    2015-06-01

    The accepted view on dendritic side-branching is that side-branches grow as the result of selective amplification of thermal noise and that in the absence of such noise dendrites would grow without the development of side-arms. However, recently there has been renewed speculation about dendrites displaying deterministic side-branching [see e.g. ME Glicksman, Metall. Mater. Trans A 43 (2012) 391]. Generally, numerical models of dendritic growth, such as phase-field simulation, have tended to display behaviour which is commensurate with the former view, in that simulated dendrites do not develop side-branches unless noise is introduced into the simulation. However, here we present simulations at high undercooling that show that under certain conditions deterministic side-branching may occur. We use a model formulated in the thin interface limit and a range of advanced numerical techniques to minimise the numerical noise introduced into the solution, including a multigrid solver. Not only are multigrid solvers one of the most efficient means of inverting the large, but sparse, system of equations that results from implicit time-stepping, they are also very effective at smoothing noise at all wavelengths. This is in contrast to most Jacobi or Gauss-Seidel iterative schemes which are effective at removing noise with wavelengths comparable to the mesh size but tend to leave noise at longer wavelengths largely undamped. From an analysis of the tangential thermal gradients on the solid-liquid interface the mechanism for side-branching appears to be consistent with the deterministic model proposed by Glicksman.

  13. Spatial continuity measures for probabilistic and deterministic geostatistics

    SciTech Connect

    Isaaks, E.H.; Srivastava, R.M.

    1988-05-01

    Geostatistics has traditionally used a probabilistic framework, one in which expected values or ensemble averages are of primary importance. The less familiar deterministic framework views geostatistical problems in terms of spatial integrals. This paper outlines the two frameworks and examines the issue of which spatial continuity measure, the covariance C(h) or the variogram σ(h), is appropriate for each framework. Although C(h) and σ(h) were defined originally in terms of spatial integrals, the convenience of probabilistic notation made the expected value definitions more common. These now classical expected value definitions entail a linear relationship between C(h) and σ(h); the spatial integral definitions do not. In a probabilistic framework, where available sample information is extrapolated to domains other than the one which was sampled, the expected value definitions are appropriate; furthermore, within a probabilistic framework, reasons exist for preferring the variogram to the covariance function. In a deterministic framework, where available sample information is interpolated within the same domain, the spatial integral definitions are appropriate and no reasons are known for preferring the variogram. A case study on a Wiener-Lévy process demonstrates differences between the two frameworks and shows that, for most estimation problems, the deterministic viewpoint is more appropriate. Several case studies on real data sets reveal that the sample covariance function reflects the character of spatial continuity better than the sample variogram. From both theoretical and practical considerations, clearly for most geostatistical problems, direct estimation of the covariance is better than the traditional variogram approach.

  14. Linearization Techniques for Controlled Piecewise Deterministic Markov Processes; Application to Zubov's Method

    SciTech Connect

    Goreac, Dan; Serea, Oana-Silvia

    2012-10-15

    We aim at characterizing domains of attraction for controlled piecewise deterministic processes using an occupational measure formulation and Zubov's approach. Firstly, we provide linear programming (primal and dual) formulations of discounted, infinite horizon control problems for PDMPs. These formulations involve an infinite-dimensional set of probability measures and are obtained using viscosity solutions theory. Secondly, these tools allow us to construct stabilizing measures and to avoid the assumption of stability under concatenation for controls. The domain of controllability is then characterized as some level set of a convenient solution of the associated Hamilton-Jacobi integro-differential equation. The theoretical results are applied to PDMPs associated with stochastic gene networks. Explicit computations are given for Cook's model for gene expression.

  15. The integrated model for solving the single-period deterministic inventory routing problem

    NASA Astrophysics Data System (ADS)

    Rahim, Mohd Kamarul Irwan Abdul; Abidin, Rahimi; Iteng, Rosman; Lamsali, Hendrik

    2016-08-01

    This paper discusses the problem of efficiently managing inventory and routing in a two-level supply chain system. Vendor Managed Inventory (VMI) is a policy that integrates decisions between a supplier and his customers. We assume that the demand at each customer is stationary and that the warehouse implements a VMI policy. The objective of this paper is to minimize the inventory and the transportation costs of the customers in a two-level supply chain. The problem is to determine the delivery quantities, delivery times and routes to the customers for the single-period deterministic inventory routing problem (SP-DIRP). As a result, a linear mixed-integer program is developed for the solution of the SP-DIRP problem.
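
    To illustrate the mixed-integer linear programming machinery on which such formulations rest, the fragment below solves a drastically simplified fixed-charge delivery problem with intlinprog. The data are toy numbers and the model omits routing entirely; it is not the paper's SP-DIRP formulation.

      % Toy fixed-charge delivery MILP: z = [q1 q2 q3 y1 y2 y3].
      cap   = [40; 60; 50];            % route capacities (toy data)
      fixc  = [120; 150; 130];         % fixed cost of opening each route
      unitc = [2.0; 1.5; 1.8];         % per-unit delivery cost
      D     = 90;                      % total demand in the single period
      f   = [unitc; fixc];             % objective coefficients
      Aeq = [1 1 1 0 0 0];  beq = D;                  % demand satisfaction
      A   = [eye(3), -diag(cap)];  b = zeros(3,1);    % q_r <= cap_r * y_r
      lb  = zeros(6,1);  ub = [cap; ones(3,1)];
      z   = intlinprog(f, 4:6, A, b, Aeq, beq, lb, ub);  % y1..y3 integer (binary)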

  16. CALTRANS: A parallel, deterministic, 3D neutronics code

    SciTech Connect

    Carson, L.; Ferguson, J.; Rogers, J.

    1994-04-01

    Our efforts to parallelize the deterministic solution of the neutron transport equation have culminated in a new neutronics code, CALTRANS, which has full 3D capability. In this article, we describe the layout and algorithms of CALTRANS and present performance measurements of the code on a variety of platforms. Explicit implementations of the parallel algorithms of CALTRANS using both the function calls of the Parallel Virtual Machine software package (PVM 3.2) and the Meiko CS-2 tagged message passing library (based on the Intel NX/2 interface) are provided in appendices.

  17. Deterministic Models of Channel Headwall Erosion: Initiation and Propagation

    DTIC Science & Technology

    1991-06-14

    Only fragments of the report documentation page and reference list survive in this record; the cited works include Beltaos (1976), Oblique impingement of plane turbulent jets, J. Hydr. Div., Amer. Soc. Civil Engrs. 102(HY9):1177-1192, and Beltaos and Rajaratnam (1973), Plane turbulent impinging jets, J. Hydr. Res. 11:29-59.

  18. Demonstration of deterministic and high fidelity squeezing of quantum information

    SciTech Connect

    Yoshikawa, Jun-ichi; Takei, Nobuyuki; Furusawa, Akira; Hayashi, Toshiki; Akiyama, Takayuki; Huck, Alexander; Andersen, Ulrik L.

    2007-12-15

    By employing a recent proposal [R. Filip, P. Marek, and U.L. Andersen, Phys. Rev. A 71, 042308 (2005)] we experimentally demonstrate a universal, deterministic, and high-fidelity squeezing transformation of an optical field. It relies only on linear optics, homodyne detection, feedforward, and an ancillary squeezed vacuum state, thus direct interaction between a strong pump and the quantum state is circumvented. We demonstrate three different squeezing levels for a coherent state input. This scheme is highly suitable for the fault-tolerant squeezing transformation in a continuous variable quantum computer.

  19. Deterministic regularization of three-dimensional optical diffraction tomography

    PubMed Central

    Sung, Yongjin; Dasari, Ramachandra R.

    2012-01-01

    In this paper we discuss a deterministic regularization algorithm to handle the missing cone problem of three-dimensional optical diffraction tomography (ODT). The missing cone problem arises in most practical applications of ODT and is responsible for elongation of the reconstructed shape and underestimation of the value of the refractive index. By applying positivity and piecewise-smoothness constraints in an iterative reconstruction framework, we effectively suppress the missing cone artifact and recover sharp edges rounded out by the missing cone, and we significantly improve the accuracy of the predictions of the refractive index. We also show the noise handling capability of our algorithm in the reconstruction process. PMID:21811316
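
    A generic projection-onto-constraints loop conveys the flavor of such iterative reconstruction: re-impose the measured spectrum where data exist, then enforce positivity (and a mild smoothness surrogate) in the object domain. The cone geometry and object below are invented placeholders, and this is not the authors' edge-preserving algorithm.

      % POCS-style missing-cone recovery sketch with positivity constraint.
      N = 128;  x_true = max(peaks(N), 0);      % stand-in nonnegative object
      Fobs = fft2(x_true);                      % "measured" spectrum
      have = true(N);  have(1:10, :) = false;   % assumed missing-cone region
      x = zeros(N);
      for it = 1:100
          X = fft2(x);
          X(have) = Fobs(have);                 % data consistency on known samples
          x = max(real(ifft2(X)), 0);           % positivity in the object domain
          x = imgaussfilt(x, 0.5);              % mild smoothness surrogate
      end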

  20. Deterministic shape control in plasma-aided nanotip assembly

    NASA Astrophysics Data System (ADS)

    Tam, E.; Levchenko, I.; Ostrikov, K.

    2006-08-01

    The possibility of deterministic plasma-assisted reshaping of capped cylindrical seed nanotips by manipulating the plasma parameter-dependent sheath width is shown. Multiscale hybrid gas phase/solid surface numerical experiments reveal that under wide-sheath conditions the nanotips widen at the base, and when the sheath is narrow, they sharpen up. By combining the wide- and narrow-sheath stages in a single process, it becomes possible to synthesize wide-base nanotips with long, narrow apex spikes, ideal for electron microemitter applications. This plasma-based approach is generic and can be applied to a large number of multipurpose nanoassemblies.

  1. Deterministic versus stochastic aspects of superexponential population growth models

    NASA Astrophysics Data System (ADS)

    Grosjean, Nicolas; Huillet, Thierry

    2016-08-01

    Deterministic population growth models with power-law rates can exhibit a large variety of growth behaviors, ranging from algebraic and exponential to hyperexponential (finite-time explosion). In this setup, self-similarity considerations play a key role, together with two time substitutions. Two stochastic versions of such models are investigated, showing a much richer variety of behaviors. One is the Lamperti construction of self-similar positive stochastic processes based on the exponentiation of spectrally positive processes, followed by an appropriate time change. The other one is based on stable continuous-state branching processes, given by another Lamperti time substitution applied to stable spectrally positive processes.
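
    The deterministic backbone of these models, the power-law growth equation dN/dt = rN^p, already shows the finite-time explosion for p > 1; a short MATLAB check against the closed-form solution (with arbitrary illustrative parameters) is:

      % Finite-time blow-up of dN/dt = r*N^p for p > 1.
      r = 1;  p = 1.5;  N0 = 1;
      T = N0^(1-p)/(r*(p-1));                      % explosion time
      tspan = linspace(0, 0.95*T, 200);            % stop just short of blow-up
      [t, N] = ode45(@(t,N) r*N^p, tspan, N0);
      Nexact = (N0^(1-p) - (p-1)*r*t).^(-1/(p-1)); % closed-form solution
      semilogy(t, N, 'o', t, Nexact, '-');
      xlabel('t'); ylabel('N(t)'); legend('ode45', 'exact');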

  2. The deterministic optical alignment of the HERMES spectrograph

    NASA Astrophysics Data System (ADS)

    Gers, Luke; Staszak, Nicholas

    2014-07-01

    The High Efficiency and Resolution Multi Element Spectrograph (HERMES) is a four-channel, VPH-grating spectrograph fed by two 400-fiber slit assemblies, whose construction and commissioning have now been completed at the Anglo Australian Telescope (AAT). The size, weight, complexity, and scheduling constraints of the system necessitated that a fully integrated, deterministic, opto-mechanical alignment system be designed into the spectrograph before it was manufactured. This paper presents the principles around which the system was assembled and aligned, including the equipment and the metrology methods employed to complete the spectrograph integration.

  3. A Deterministic Transport Code for Space Environment Electrons

    NASA Technical Reports Server (NTRS)

    Nealy, John E.; Chang, C. K.; Norman, Ryan B.; Blattnig, Steve R.; Badavi, Francis F.; Adamczyk, Anne M.

    2010-01-01

    A deterministic computational procedure has been developed to describe transport of space environment electrons in various shield media. This code is an upgrade and extension of an earlier electron code. Whereas the former code was formulated on the basis of parametric functions derived from limited laboratory data, the present code utilizes well established theoretical representations to describe the relevant interactions and transport processes. The shield material specification has been made more general, as have the pertinent cross sections. A combined mean free path and average trajectory approach has been used in the transport formalism. Comparisons with Monte Carlo calculations are presented.

  4. Non-deterministic analysis of ocean environment loads

    SciTech Connect

    Fang Huacan; Xu Fayan; Gao Guohua; Xu Xingping

    1995-12-31

    Ocean environment loads consist of wind forces, sea wave forces, etc. Sea wave forces have not only randomness but also fuzziness. Hence a non-deterministic description of the wave environment must be carried out when designing an offshore structure or evaluating the safety of offshore structural members in service. In order to account for the randomness of sea waves, a wind-speed single-parameter sea wave spectrum is proposed in this paper. A new fuzzy grading statistical method for treating the fuzziness of sea wave height H and period T is also given. Finally, the principle and procedure for calculating the fuzzy random sea wave spectrum are presented.

  5. Lasing in an optimized deterministic aperiodic nanobeam cavity

    NASA Astrophysics Data System (ADS)

    Moon, Seul-Ki; Jeong, Kwang-Yong; Noh, Heeso; Yang, Jin-Kyu

    2016-12-01

    We have demonstrated lasing action from partially extended modes in deterministic aperiodic nanobeam cavities, inflated by the Rudin-Shapiro sequence with two different air holes, at room temperature. By varying the size ratio of the holes, and hence the structural aperiodicity, different optical lasing modes were obtained with maximized quality factors. The lasing characteristics of the partially extended modes were confirmed by numerical simulations based on scanning microscope images of the fabricated samples. We believe that these partially extended nanobeam modes will be useful for label-free optical biosensors.

  6. Deterministic Smoluchowski-Feynman ratchets driven by chaotic noise.

    PubMed

    Chew, Lock Yue

    2012-01-01

    We have elucidated the effect of statistical asymmetry on the directed current in Smoluchowski-Feynman ratchets driven by chaotic noise. Based on the inhomogeneous Smoluchowski equation and its generalized version, we arrive at analytical expressions for the directed current that include a source term. The source term indicates that statistical asymmetry can drive the system further away from thermodynamic equilibrium, as exemplified by the constant flashing, the state-dependent, and the tilted deterministic Smoluchowski-Feynman ratchets, with the consequence of an enhancement in the directed current.

  7. Deterministic analysis of processes at corroding metal surfaces and the study of electrochemical noise in these systems

    SciTech Connect

    Latanision, R.M.

    1990-12-01

    Electrochemical corrosion is pervasive in virtually all engineering systems and in virtually all industrial circumstances. Although engineers now understand how to design systems to minimize corrosion in many instances, many fundamental questions remain poorly understood and, therefore, the development of corrosion control strategies is based more on empiricism than on a deep understanding of the processes by which metals corrode in electrolytes. Fluctuations in potential, or current, in electrochemical systems have been observed for many years. To date, all investigations of this phenomenon have utilized non-deterministic analyses. In this work it is proposed to study electrochemical noise from a deterministic viewpoint by comparison of experimental parameters, such as first and second order moments (non-deterministic), with computer simulation of corrosion at metal surfaces. In this way it is proposed to analyze the origins of these fluctuations and to elucidate the relationship between these fluctuations and kinetic parameters associated with metal dissolution and cathodic reduction reactions. This research program addresses in essence two areas of interest: (a) computer modeling of corrosion processes in order to study the electrochemical processes on an atomistic scale, and (b) experimental investigations of fluctuations in electrochemical systems and correlation of experimental results with computer modeling. In effect, the noise generated by mathematical modeling will be analyzed and compared to experimental noise in electrochemical systems. 1 fig.

  8. Investigation of Matlab® as platform in navigation and control of an Automatic Guided Vehicle utilising an omnivision sensor.

    PubMed

    Kotze, Ben; Jordaan, Gerrit

    2014-08-25

    Automatic Guided Vehicles (AGVs) are navigated utilising multiple types of sensors for detecting the environment. In this investigation such sensors are replaced and/or minimized by the use of a single omnidirectional camera picture stream. An area of interest is extracted, and by using image processing the vehicle is navigated on a set path. Reconfigurability is added to the route layout by signs incorporated in the navigation process. The result is the possible manipulation of a number of AGVs, each on its own designated colour-signed path. This route is reconfigurable by the operator with no programming alteration or intervention. A low resolution camera and a Matlab® software development platform are utilised. The use of Matlab® lends itself to speedy evaluation and implementation of image processing options on the AGV, but its functioning in such an environment needs to be assessed.

  9. Hybrid photovoltaic/thermal (PV/T) solar systems simulation with Simulink/Matlab

    SciTech Connect

    da Silva, R.M.; Fernandes, J.L.M.

    2010-12-15

    and perform reasonably well. The Simulink modeling platform has been mainly used worldwide for the simulation of control systems, digital signal processing and electric circuits, but there are very few examples of application to solar energy systems modeling. This work uses the modular environment of Simulink/Matlab to model individual PV/T system components, and to assemble the entire installation layout. The results show that the modular approach strategy provided by the Matlab/Simulink environment is applicable to solar systems modeling, providing good code scalability, faster development time, and simpler integration with external computational tools, when compared with traditional imperative-oriented programming languages. (author)

  10. Deterministic composite nanophotonic lattices in large area for broadband applications

    PubMed Central

    Xavier, Jolly; Probst, Jürgen; Becker, Christiane

    2016-01-01

    Exotic manipulation of the flow of photons in nanoengineered materials with an aperiodic distribution of nanostructures plays a key role in efficiency-enhanced broadband photonic and plasmonic technologies for spectrally tailorable integrated biosensing, nanostructured thin film solar cells, white light emitting diodes, novel plasmonic ensembles, etc. Through a generic deterministic nanotechnological route here we show subwavelength-scale silicon (Si) nanostructures on nanoimprinted glass substrate in large area (4 cm²) with advanced functional features of aperiodic composite nanophotonic lattices. These nanophotonic aperiodic lattices have easily tailorable supercell tiles with well-defined and discrete lattice basis elements and they show rich Fourier spectra. The presented nanophotonic lattices are designed functionally akin to two-dimensional aperiodic composite lattices with unconventional flexibility, comprising periodic photonic crystals and/or in-plane photonic quasicrystals as pattern design subsystems. The fabricated composite lattice-structured Si nanostructures are comparatively analyzed with a range of nanophotonic structures with conventional lattice geometries of periodic, disordered random as well as in-plane quasicrystalline photonic lattices with comparable lattice parameters. As a proof of concept of compatibility with advanced bottom-up liquid phase crystallized (LPC) Si thin film fabrication, the experimental structural analysis is further extended to double-side-textured deterministic aperiodic lattice-structured 10 μm thick large area LPC Si film on nanoimprinted substrates. PMID:27941869

  11. Directional locking in deterministic lateral-displacement microfluidic separation systems.

    PubMed

    Risbud, Sumedh R; Drazer, German

    2014-07-01

    We analyze the trajectory of suspended spherical particles moving through a square array of obstacles, in the deterministic limit and at zero Reynolds number. We show that in the dilute approximation of widely separated obstacles, the average motion of the particles is equivalent to the trajectory followed by a point particle moving through an array of obstacles with an effective radius. The effective radius accounts for the hydrodynamic as well as short-range repulsive nonhydrodynamic interactions between the suspended particles and the obstacles, and is equal to the critical offset at which particle trajectories become irreversible. Using this equivalent system we demonstrate the presence of directional locking in the trajectory of the particles and derive an inequality that accurately describes the "devil's staircase" type of structure observed in the migration angle as a function of the forcing direction. We use these results to determine the optimum resolution in the fractionation of binary mixtures using deterministic lateral-displacement microfluidic separation systems as well as to comment on the collision frequencies when the arrays of posts are utilized as immunocapture devices.

  12. Deterministic doping and the exploration of spin qubits

    SciTech Connect

    Schenkel, T.; Weis, C. D.; Persaud, A.; Lo, C. C.; Chakarov, I.; Schneider, D. H.; Bokor, J.

    2015-01-09

    Deterministic doping by single ion implantation, the precise placement of individual dopant atoms into devices, is a path for the realization of quantum computer test structures where quantum bits (qubits) are based on electron and nuclear spins of donors or color centers. We present a donor - quantum dot type qubit architecture and discuss the use of medium and highly charged ions extracted from an Electron Beam Ion Trap/Source (EBIT/S) for deterministic doping. EBIT/S are attractive for the formation of qubit test structures due to the relatively low emittance of ion beams from an EBIT/S and due to the potential energy associated with the ions' charge state, which can aid single ion impact detection. Following ion implantation, dopant specific diffusion mechanisms during device processing affect the placement accuracy and coherence properties of donor spin qubits. For bismuth, range straggling is minimal but its relatively low solubility in silicon limits thermal budgets for the formation of qubit test structures.

  13. Deterministic Stress Modeling of Hot Gas Segregation in a Turbine

    NASA Technical Reports Server (NTRS)

    Busby, Judy; Sondak, Doug; Staubach, Brent; Davis, Roger

    1998-01-01

    Simulation of unsteady viscous turbomachinery flowfields is presently impractical as a design tool due to the long run times required. Designers rely predominantly on steady-state simulations, but these simulations do not account for some of the important unsteady flow physics. Unsteady flow effects can be modeled as source terms in the steady flow equations. These source terms, referred to as Lumped Deterministic Stresses (LDS), can be used to drive steady flow solution procedures to reproduce the time-average of an unsteady flow solution. The goal of this work is to investigate the feasibility of using inviscid lumped deterministic stresses to model unsteady combustion hot streak migration effects on the turbine blade tip and outer air seal heat loads using a steady computational approach. The LDS model is obtained from an unsteady inviscid calculation. The LDS model is then used with a steady viscous computation to simulate the time-averaged viscous solution. Both two-dimensional and three-dimensional applications are examined. The inviscid LDS model produces good results for the two-dimensional case and requires less than 10% of the CPU time of the unsteady viscous run. For the three-dimensional case, the LDS model does a good job of reproducing the time-averaged viscous temperature migration and separation as well as heat load on the outer air seal at a CPU cost that is 25% of that of an unsteady viscous computation.

  14. Deterministic nature of the underlying dynamics of surface wind fluctuations

    NASA Astrophysics Data System (ADS)

    Sreelekshmi, R. C.; Asokan, K.; Satheesh Kumar, K.

    2012-10-01

    Modelling the fluctuations of the Earth's surface wind has a significant role in understanding the dynamics of the atmosphere, besides its impact on various fields ranging from agriculture to structural engineering. Most of the studies on the modelling and prediction of wind speed and power reported in the literature are based on statistical methods or the probabilistic distribution of the wind speed data. In this paper we investigate the suitability of a deterministic model to represent the wind speed fluctuations by employing tools of nonlinear dynamics. We have carried out a detailed nonlinear time series analysis of the daily mean wind speed data measured at Thiruvananthapuram (8.483° N, 76.950° E) from 2000 to 2010. The results of the analysis strongly suggest that the underlying dynamics is deterministic, low-dimensional and chaotic, suggesting the possibility of accurate short-term prediction. As most of the chaotic systems are confined to laboratories, this is another example of a naturally occurring time series showing chaotic behaviour.
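
    The first step of such an analysis, delay-coordinate embedding of the scalar series, is easily reproduced; the sketch below embeds a synthetic chaotic series (a logistic map standing in for the wind-speed record, with the delay and embedding dimension chosen arbitrarily).

      % Delay-coordinate embedding of a scalar time series.
      x = zeros(2000,1);  x(1) = 0.3;
      for n = 1:1999, x(n+1) = 4*x(n)*(1 - x(n)); end   % chaotic surrogate data
      tau = 1;  m = 3;                 % delay and embedding dimension (assumed)
      M = length(x) - (m-1)*tau;
      Y = zeros(M, m);
      for j = 1:m
          Y(:,j) = x((1:M) + (j-1)*tau);                % embedded state vectors
      end
      plot3(Y(:,1), Y(:,2), Y(:,3), '.');               % reconstructed attractor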

  15. Stochastic and deterministic causes of streamer branching in liquid dielectrics

    SciTech Connect

    Jadidian, Jouya; Zahn, Markus; Lavesson, Nils; Widlund, Ola; Borg, Karl

    2013-08-14

    Streamer branching in liquid dielectrics is driven by stochastic and deterministic factors. The presence of stochastic causes of streamer branching such as inhomogeneities inherited from noisy initial states, impurities, or charge carrier density fluctuations is inevitable in any dielectric. A fully three-dimensional streamer model presented in this paper indicates that deterministic origins of branching are intrinsic attributes of streamers, which in some cases make the branching inevitable depending on the shape and velocity of the volume charge at the streamer frontier. Specifically, any given inhomogeneous perturbation can result in streamer branching if the volume charge layer at the original streamer head is relatively thin and slow enough. Furthermore, the discrete nature of electrons at the leading edge of an ionization front always guarantees the existence of a non-zero inhomogeneous perturbation ahead of the streamer head, even when it propagates in a perfectly homogeneous dielectric. Based on the modeling results for streamers propagating in a liquid dielectric, a gauge on the streamer head geometry is introduced that determines whether the branching occurs under particular inhomogeneous circumstances. The estimated number, diameter, and velocity of the newly formed branches agree qualitatively with experimental images of streamer branching.

  16. On the deterministic and stochastic use of hydrologic models

    USGS Publications Warehouse

    Farmer, William H.; Vogel, Richard M.

    2016-01-01

    Environmental simulation models, such as precipitation-runoff watershed models, are increasingly used in a deterministic manner for environmental and water resources design, planning, and management. In operational hydrology, simulated responses are now routinely used to plan, design, and manage a very wide class of water resource systems. However, all such models are calibrated to existing data sets and retain some residual error. This residual, typically unknown in practice, is often ignored, implicitly trusting simulated responses as if they are deterministic quantities. In general, ignoring the residuals will result in simulated responses with distributional properties that do not mimic those of the observed responses. This discrepancy has major implications for the operational use of environmental simulation models as is shown here. Both a simple linear model and a distributed-parameter precipitation-runoff model are used to document the expected bias in the distributional properties of simulated responses when the residuals are ignored. The systematic reintroduction of residuals into simulated responses in a manner that produces stochastic output is shown to improve the distributional properties of the simulated responses. Every effort should be made to understand the distributional behavior of simulation residuals and to use environmental simulation models in a stochastic manner.

  17. On the deterministic and stochastic use of hydrologic models

    NASA Astrophysics Data System (ADS)

    Farmer, William H.; Vogel, Richard M.

    2016-07-01

    Environmental simulation models, such as precipitation-runoff watershed models, are increasingly used in a deterministic manner for environmental and water resources design, planning, and management. In operational hydrology, simulated responses are now routinely used to plan, design, and manage a very wide class of water resource systems. However, all such models are calibrated to existing data sets and retain some residual error. This residual, typically unknown in practice, is often ignored, implicitly trusting simulated responses as if they are deterministic quantities. In general, ignoring the residuals will result in simulated responses with distributional properties that do not mimic those of the observed responses. This discrepancy has major implications for the operational use of environmental simulation models as is shown here. Both a simple linear model and a distributed-parameter precipitation-runoff model are used to document the expected bias in the distributional properties of simulated responses when the residuals are ignored. The systematic reintroduction of residuals into simulated responses in a manner that produces stochastic output is shown to improve the distributional properties of the simulated responses. Every effort should be made to understand the distributional behavior of simulation residuals and to use environmental simulation models in a stochastic manner.

  18. Deterministic composite nanophotonic lattices in large area for broadband applications

    NASA Astrophysics Data System (ADS)

    Xavier, Jolly; Probst, Jürgen; Becker, Christiane

    2016-12-01

    Exotic manipulation of the flow of photons in nanoengineered materials with an aperiodic distribution of nanostructures plays a key role in efficiency-enhanced broadband photonic and plasmonic technologies for spectrally tailorable integrated biosensing, nanostructured thin-film solar cells, white light emitting diodes, novel plasmonic ensembles, etc. Through a generic deterministic nanotechnological route, we show here subwavelength-scale silicon (Si) nanostructures on a nanoimprinted glass substrate over a large area (4 cm²) with the advanced functional features of aperiodic composite nanophotonic lattices. These aperiodic nanophotonic lattices have easily tailorable supercell tiles with well-defined and discrete lattice basis elements, and they show rich Fourier spectra. The presented nanophotonic lattices are designed to function akin to two-dimensional aperiodic composite lattices with unconventional flexibility, comprising periodic photonic crystals and/or in-plane photonic quasicrystals as pattern design subsystems. The fabricated composite lattice-structured Si nanostructures are comparatively analyzed against a range of nanophotonic structures with conventional lattice geometries: periodic, disordered random, and in-plane quasicrystalline photonic lattices with comparable lattice parameters. As a proof of concept of compatibility with advanced bottom-up liquid-phase-crystallized (LPC) Si thin-film fabrication, the experimental structural analysis is further extended to a double-side-textured, deterministic aperiodic lattice-structured, 10 μm thick, large-area LPC Si film on nanoimprinted substrates.

  19. Deterministic photon-emitter coupling in chiral photonic circuits

    NASA Astrophysics Data System (ADS)

    Söllner, Immo; Mahmoodian, Sahand; Hansen, Sofie Lindskov; Midolo, Leonardo; Javadi, Alisa; Kiršanskė, Gabija; Pregnolato, Tommaso; El-Ella, Haitham; Lee, Eun Hye; Song, Jin Dong; Stobbe, Søren; Lodahl, Peter

    2015-09-01

    Engineering photon emission and scattering is central to modern photonics applications ranging from light harvesting to quantum-information processing. To this end, nanophotonic waveguides are well suited as they confine photons to a one-dimensional geometry and thereby increase the light-matter interaction. In a regular waveguide, a quantum emitter interacts equally with photons in either of the two propagation directions. This symmetry is violated in nanophotonic structures in which non-transversal local electric-field components imply that photon emission and scattering may become directional. Here we show that the helicity of the optical transition of a quantum emitter determines the direction of single-photon emission in a specially engineered photonic-crystal waveguide. We observe single-photon emission into the waveguide with a directionality that exceeds 90% under conditions in which practically all the emitted photons are coupled to the waveguide. The chiral light-matter interaction enables deterministic and highly directional photon emission for experimentally achievable on-chip non-reciprocal photonic elements. These may serve as key building blocks for single-photon optical diodes, transistors and deterministic quantum gates. Furthermore, chiral photonic circuits allow the dissipative preparation of entangled states of multiple emitters for experimentally achievable parameters, may lead to novel topological photon states and could be applied for directional steering of light.

  20. Predictability of normal heart rhythms and deterministic chaos

    NASA Astrophysics Data System (ADS)

    Lefebvre, J. H.; Goodings, D. A.; Kamath, M. V.; Fallen, E. L.

    1993-04-01

    The evidence for deterministic chaos in normal heart rhythms is examined. Electrocardiograms were recorded from 29 subjects falling into four groups: a young healthy group, an older healthy group, and two groups of patients who had recently suffered an acute myocardial infarction. From the measured R-R intervals, a time series of 1000 first differences was constructed for each subject. The correlation integral of Grassberger and Procaccia was calculated for several subjects using these relatively short time series. No evidence was found for the existence of an attractor having a dimension less than about 4. However, a prediction method recently proposed by Sugihara and May and an autoregressive linear predictor both show that there is a measure of short-term predictability in the differenced R-R intervals. Further analysis revealed that the short-term predictability calculated by the Sugihara-May method is not consistent with the null hypothesis of a Gaussian random process. The evidence for a small amount of nonlinear dynamical behavior, together with the short-term predictability, suggests that there is an element of deterministic chaos in normal heart rhythms, although it is not strong or persistent. Finally, two useful parameters of the predictability curves are identified, namely the `first step predictability' and the `predictability decay rate,' neither of which appears to be significantly correlated with the standard deviation of the R-R intervals.
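
    The sketch below is a crude stand-in for the prediction step: embed the differenced series, predict each test point from the successor of its single nearest library neighbour, and use the prediction-observation correlation as the predictability measure (the actual Sugihara-May simplex method averages over m+1 neighbours with exponential weights). The placeholder data and embedding dimension are ours.

      r = randn(1000,1);                       % placeholder for differenced R-R intervals
      m = 3;                                   % assumed embedding dimension
      N = numel(r) - m;
      X = zeros(N, m); y = zeros(N, 1);
      for i = 1:N
          X(i,:) = r(i:i+m-1)';                % delay vector
          y(i)   = r(i+m);                     % its one-step successor
      end
      lib = 1:floor(N/2); tst = floor(N/2)+1:N;
      pred = zeros(numel(tst), 1);
      for j = 1:numel(tst)
          d = sum((X(lib,:) - X(tst(j),:)).^2, 2);   % distances to library vectors
          [~, nn] = min(d);
          pred(j) = y(lib(nn));                % predict successor of nearest neighbour
      end
      cc = corrcoef(pred, y(tst));             % predictability: correlation of predicted vs actual
      rho = cc(1,2);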

  1. An advanced deterministic method for spent fuel criticality safety analysis

    SciTech Connect

    DeHart, M.D.

    1998-01-01

    Over the past two decades, criticality safety analysts have come to rely to a large extent on Monte Carlo methods for criticality calculations. Monte Carlo has become popular because of its capability to model complex, non-orthogonal configurations of fissile materials, typical of real-world problems. Over the last few years, however, interest in deterministic transport methods has been revived, due to shortcomings of the stochastic nature of Monte Carlo approaches for certain types of analyses. Specifically, deterministic methods are superior to stochastic methods for calculations requiring accurate neutron density distributions or differential fluxes. Although Monte Carlo methods are well suited for eigenvalue calculations, they lack the localized detail necessary to assess uncertainties and sensitivities important in determining a range of applicability. Monte Carlo methods are also inefficient as a transport solution for multiple-pin depletion methods. Discrete ordinates methods have long been recognized as one of the most rigorous and accurate approximations used to solve the transport equation. However, until recently, geometric constraints in finite-differencing schemes have made discrete ordinates methods impractical for non-orthogonal configurations such as reactor fuel assemblies. The development of an extended step characteristic (ESC) technique removes the grid-structure limitations of traditional discrete ordinates methods. The NEWT computer code, a discrete ordinates code built upon the ESC formalism, is being developed as part of the SCALE code system. This paper will demonstrate the power, versatility, and applicability of NEWT as a state-of-the-art solution for current computational needs.

  2. Shock-induced explosive chemistry in a deterministic sample configuration.

    SciTech Connect

    Stuecker, John Nicholas; Castaneda, Jaime N.; Cesarano, Joseph, III; Trott, Wayne Merle; Baer, Melvin R.; Tappan, Alexander Smith

    2005-10-01

    Explosive initiation and energy release have been studied in two sample geometries designed to minimize stochastic behavior in shock-loading experiments. These sample concepts include a design with explosive material occupying the hole locations of a close-packed bed of inert spheres and a design that utilizes infiltration of a liquid explosive into a well-defined inert matrix. Wave profiles transmitted by these samples in gas-gun impact experiments have been characterized by both velocity interferometry diagnostics and three-dimensional numerical simulations. Highly organized wave structures associated with the characteristic length scales of the deterministic samples have been observed. Initiation and reaction growth in an inert matrix filled with sensitized nitromethane (a homogeneous explosive material) result in wave profiles similar to those observed with heterogeneous explosives. Comparison of experimental and numerical results indicates that energetic material studies in deterministic sample geometries can provide an important new tool for validation of models of energy release in numerical simulations of explosive initiation and performance.

  3. Deterministic direct reprogramming of somatic cells to pluripotency.

    PubMed

    Rais, Yoach; Zviran, Asaf; Geula, Shay; Gafni, Ohad; Chomsky, Elad; Viukov, Sergey; Mansour, Abed AlFatah; Caspi, Inbal; Krupalnik, Vladislav; Zerbib, Mirie; Maza, Itay; Mor, Nofar; Baran, Dror; Weinberger, Leehee; Jaitin, Diego A; Lara-Astiaso, David; Blecher-Gonen, Ronnie; Shipony, Zohar; Mukamel, Zohar; Hagai, Tzachi; Gilad, Shlomit; Amann-Zalcenstein, Daniela; Tanay, Amos; Amit, Ido; Novershtern, Noa; Hanna, Jacob H

    2013-10-03

    Somatic cells can be inefficiently and stochastically reprogrammed into induced pluripotent stem (iPS) cells by exogenous expression of Oct4 (also called Pou5f1), Sox2, Klf4 and Myc (hereafter referred to as OSKM). The nature of the predominant rate-limiting barrier(s) preventing the majority of cells from reprogramming successfully and synchronously remains to be defined. Here we show that depleting Mbd3, a core member of the Mbd3/NuRD (nucleosome remodelling and deacetylation) repressor complex, together with OSKM transduction and reprogramming in naive pluripotency promoting conditions, results in deterministic and synchronized iPS cell reprogramming (near 100% efficiency within seven days from mouse and human cells). Our findings uncover a dichotomous molecular function for the reprogramming factors, serving to reactivate endogenous pluripotency networks while simultaneously directly recruiting the Mbd3/NuRD repressor complex that potently restrains the reactivation of OSKM downstream target genes. Subsequently, the latter interactions, which are largely depleted during early pre-implantation development in vivo, lead to a stochastic and protracted reprogramming trajectory towards pluripotency in vitro. The deterministic reprogramming approach devised here offers a novel platform for the dissection of molecular dynamics leading to the establishment of pluripotency with unprecedented flexibility and resolution.

  4. Forced Translocation of Polymer through Nanopore: Deterministic Model and Simulations

    NASA Astrophysics Data System (ADS)

    Wang, Yanqian; Panyukov, Sergey; Liao, Qi; Rubinstein, Michael

    2012-02-01

    We propose a new theoretical model of forced translocation of a polymer chain through a nanopore. We assume that DNA translocation at high fields proceeds too fast for the chain to relax, and thus the chain unravels loop by loop in an almost deterministic way. The distribution of translocation times of a given monomer is therefore controlled by the initial conformation of the chain (the distribution of its loops). Our model predicts the translocation time of each monomer as an explicit function of the initial polymer conformation. We refer to this concept as ``fingerprinting''. The width of the translocation time distribution is determined by the loop distribution in the initial conformation as well as by the thermal fluctuations of the polymer chain during the translocation process. We show that the conformational broadening of the translocation time of the m-th monomer, δt_m ~ m^1.5, is stronger than the thermal broadening, δt_m ~ m^1.25. The predictions of our deterministic model were verified by extensive molecular dynamics simulations.

  5. Deterministic composite nanophotonic lattices in large area for broadband applications.

    PubMed

    Xavier, Jolly; Probst, Jürgen; Becker, Christiane

    2016-12-12

    Exotic manipulation of the flow of photons in nanoengineered materials with an aperiodic distribution of nanostructures plays a key role in efficiency-enhanced broadband photonic and plasmonic technologies for spectrally tailorable integrated biosensing, nanostructured thin-film solar cells, white light emitting diodes, novel plasmonic ensembles, etc. Through a generic deterministic nanotechnological route, we show here subwavelength-scale silicon (Si) nanostructures on a nanoimprinted glass substrate over a large area (4 cm²) with the advanced functional features of aperiodic composite nanophotonic lattices. These aperiodic nanophotonic lattices have easily tailorable supercell tiles with well-defined and discrete lattice basis elements, and they show rich Fourier spectra. The presented nanophotonic lattices are designed to function akin to two-dimensional aperiodic composite lattices with unconventional flexibility, comprising periodic photonic crystals and/or in-plane photonic quasicrystals as pattern design subsystems. The fabricated composite lattice-structured Si nanostructures are comparatively analyzed against a range of nanophotonic structures with conventional lattice geometries: periodic, disordered random, and in-plane quasicrystalline photonic lattices with comparable lattice parameters. As a proof of concept of compatibility with advanced bottom-up liquid-phase-crystallized (LPC) Si thin-film fabrication, the experimental structural analysis is further extended to a double-side-textured, deterministic aperiodic lattice-structured, 10 μm thick, large-area LPC Si film on nanoimprinted substrates.

  6. Deterministic photon-emitter coupling in chiral photonic circuits.

    PubMed

    Söllner, Immo; Mahmoodian, Sahand; Hansen, Sofie Lindskov; Midolo, Leonardo; Javadi, Alisa; Kiršanskė, Gabija; Pregnolato, Tommaso; El-Ella, Haitham; Lee, Eun Hye; Song, Jin Dong; Stobbe, Søren; Lodahl, Peter

    2015-09-01

    Engineering photon emission and scattering is central to modern photonics applications ranging from light harvesting to quantum-information processing. To this end, nanophotonic waveguides are well suited as they confine photons to a one-dimensional geometry and thereby increase the light-matter interaction. In a regular waveguide, a quantum emitter interacts equally with photons in either of the two propagation directions. This symmetry is violated in nanophotonic structures in which non-transversal local electric-field components imply that photon emission and scattering may become directional. Here we show that the helicity of the optical transition of a quantum emitter determines the direction of single-photon emission in a specially engineered photonic-crystal waveguide. We observe single-photon emission into the waveguide with a directionality that exceeds 90% under conditions in which practically all the emitted photons are coupled to the waveguide. The chiral light-matter interaction enables deterministic and highly directional photon emission for experimentally achievable on-chip non-reciprocal photonic elements. These may serve as key building blocks for single-photon optical diodes, transistors and deterministic quantum gates. Furthermore, chiral photonic circuits allow the dissipative preparation of entangled states of multiple emitters for experimentally achievable parameters, may lead to novel topological photon states and could be applied for directional steering of light.

  7. A covariance NMR toolbox for MATLAB and OCTAVE.

    PubMed

    Short, Timothy; Alzapiedi, Leigh; Brüschweiler, Rafael; Snyder, David

    2011-03-01

    The Covariance NMR Toolbox is a new software suite that provides a streamlined implementation of covariance-based analysis of multi-dimensional NMR data. The Covariance NMR Toolbox uses MATLAB or, alternatively, the freely available GNU Octave language, providing a user-friendly environment in which to apply and explore covariance techniques. Covariance methods implemented in the toolbox described here include direct and indirect covariance processing, 4D covariance, generalized indirect covariance (GIC), and Z-matrix transform. In order to provide compatibility with a wide variety of spectrometer and spectral analysis platforms, the Covariance NMR Toolbox uses the NMRPipe format for both input and output files. Additionally, datasets small enough to fit in memory are stored as arrays that can be displayed and further manipulated in a versatile manner within MATLAB or Octave.
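
    For illustration, direct covariance processing reduces to one line of linear algebra: the covariance spectrum is the matrix square root of S'S, where S holds the spectrum after Fourier transform along the direct dimension. A minimal sketch with a placeholder matrix (the toolbox itself reads NMRPipe files and handles the other variants):

      S = randn(64, 128);            % placeholder: 64 indirect increments x 128 direct points
      C = real(sqrtm(S.' * S));      % direct covariance spectrum, C = (S'S)^(1/2);
                                     % real() strips rounding-level imaginary parts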

  8. HYDRORECESSION: A Matlab toolbox for streamflow recession analysis

    NASA Astrophysics Data System (ADS)

    Arciniega-Esparza, Saúl; Breña-Naranjo, José Agustín; Pedrozo-Acuña, Adrián; Appendini, Christian Mario

    2017-01-01

    Streamflow recession analysis from observed hydrographs allows one to extract information about the storage-discharge relationship of a catchment and some of its groundwater hydraulic properties. The HYDRORECESSION toolbox, presented in this paper, is a graphical user interface for Matlab developed to analyse streamflow recession curves with the support of different tools. The software extracts hydrograph recession segments with three different methods (Vogel, Brutsaert and Aksoy), which are then analysed with four of the most common models for simulating recession curves (Maillet, Boussinesq, Coutagne and Wittenberg), and it includes four parameter-fitting techniques (linear regression, lower envelope, data binning and mean squared error). HYDRORECESSION offers tools to parameterize linear and nonlinear storage-outflow relationships and is useful for regionalization purposes, catchment classification, baseflow separation, hydrological modeling and low-flow prediction. HYDRORECESSION is freely available for non-commercial and academic purposes at Matlab File Exchange (http://www.mathworks.com/matlabcentral/fileexchange/51332-hydroecession).
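
    As an example of the simplest model/fitting-technique pairing named above, the Maillet model Q(t) = Q0*exp(-t/k) becomes linear in log Q, so it can be fitted to one extracted recession segment by linear regression. The synthetic segment and variable names below are illustrative, not the toolbox's API:

      t = (0:9)';                                      % days since recession onset
      Q = 12*exp(-t/6) .* (1 + 0.02*randn(size(t)));   % synthetic discharge, m^3/s
      p = polyfit(t, log(Q), 1);                       % log Q = log Q0 - t/k
      k  = -1/p(1);                                    % storage constant, days
      Q0 = exp(p(2));                                  % discharge at recession onset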

  9. MultiElec: A MATLAB Based Application for MEA Data Analysis.

    PubMed

    Georgiadis, Vassilis; Stephanou, Anastasis; Townsend, Paul A; Jackson, Thomas R

    2015-01-01

    We present MultiElec, an open-source MATLAB-based application for data analysis of microelectrode array (MEA) recordings. MultiElec provides an extremely user-friendly graphical user interface (GUI) that allows the simultaneous display and analysis of voltage traces for 60 electrodes, and it includes functions for activation-time determination and for producing activation-time heat maps with isoline display. Furthermore, local conduction velocities are semi-automatically calculated along with their corresponding vector plots. MultiElec allows ad hoc signal suppression, enabling the user to easily and efficiently handle signal artefacts and to analyse incomplete data sets. Voltage traces and heat maps can be simply exported for figure production and presentation. In addition, our platform is able to produce 3D videos of signal progression over all 60 electrodes. Functions are controlled entirely by a single GUI with no need for command-line input or any understanding of MATLAB code. MultiElec is open source under the terms of the GNU General Public License as published by the Free Software Foundation, version 3. Both the program and source code are available to download from http://www.cancer.manchester.ac.uk/MultiElec/.
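
    The abstract does not specify MultiElec's activation-time criterion; a common convention for MEA field potentials, sketched below on a synthetic trace, is the time of steepest negative deflection (maximum -dV/dt):

      fs = 10e3;                                       % assumed sampling rate, Hz
      t  = (0:1/fs:0.05)';                             % 50 ms trace
      v  = -exp(-((t - 0.02)/0.001).^2) + 0.02*randn(size(t));  % synthetic field potential
      dv = diff(v) * fs;                               % dV/dt
      [~, idx] = min(dv);                              % steepest negative slope
      tAct = t(idx);                                   % activation time, s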

  10. MatLab script to C code converter for embedded processors of FLASH LLRF control system

    NASA Astrophysics Data System (ADS)

    Bujnowski, K.; Siemionczyk, A.; Pucyk, P.; Szewiński, J.; Pożniak, K. T.; Romaniuk, R. S.

    2008-01-01

    The low-level RF control system (LLRF) of the FEL serves to stabilize the electromagnetic (EM) field in the superconducting niobium resonant microwave cavities and to control the high-power (MW) klystron. The LLRF system of the FLASH accelerator is based on FPGA technology and embedded microprocessors. Basic and auxiliary functions of the system are listed, as well as the algorithms used for superconducting cavity parameter identification. These algorithms were prepared originally in Matlab. The main part of the paper presents the implementation of the cavity parameter identification algorithm on a PowerPC processor embedded in the Virtex-II Pro FPGA circuit. The construction of a very compact Matlab-script-to-C-code converter, referred to as M2C, is presented. The application is designed specifically for embedded systems with very limited resources. The generated code is optimized for size and should be portable between different hardware platforms. The converter generates code both for Linux and for stand-alone applications. The functional structure of the program and its operation are described. The FLEX and BISON tools were used to construct the converter. The paper concludes with an example of applying M2C to convert a complex identification algorithm for the superconducting cavities of the FLASH laser.

  11. Towards the Standardization of a MATLAB-Based Control Systems Laboratory Experience for Undergraduate Students

    SciTech Connect

    Dixon, W.E.

    2001-03-15

    This paper seeks to begin a discussion with regard to developing standardized Computer Aided Control System Design (CACSD) tools that are typically utilized in an undergraduate controls laboratory. The advocated CACSD design tools are based on the popular, commercially available MATLAB environment, the Simulink toolbox, and the Real-Time Workshop toolbox. The primary advantages of the proposed approach are as follows: (1) the required computer hardware is low cost, (2) commercially available plants from different manufacturers can be supported under the same CACSD environment with no hardware modifications, (3) both the Windows and Linux operating systems can be supported via the MATLAB based Real-Time Windows Target and the Quality Real Time Systems (QRTS) based Real-Time Linux Target, and (4) the Simulink block diagram approach can be utilized to prototype control strategies; thereby, eliminating the need for low level programming skills. It is believed that the above advantages related to standardization of the CACSD design tools will facilitate: (1) the sharing of laboratory resources within each university (i.e., between departments) and (2) the development of Internet laboratory experiences for students (i.e., between universities).

  12. CPMC-Lab: A MATLAB package for Constrained Path Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Nguyen, Huy; Shi, Hao; Xu, Jie; Zhang, Shiwei

    2014-12-01

    We describe CPMC-Lab, a MATLAB program for the constrained-path and phaseless auxiliary-field Monte Carlo methods. These methods have allowed applications ranging from the study of strongly correlated models, such as the Hubbard model, to ab initio calculations in molecules and solids. The present package implements the full ground-state constrained-path Monte Carlo (CPMC) method in MATLAB with a graphical interface, using the Hubbard model as an example. The package can perform calculations in finite supercells in any number of dimensions, under periodic or twist boundary conditions. Importance sampling and all other algorithmic details of a total energy calculation are included and illustrated. This open-source tool allows users to experiment with various model and run parameters and visualize the results. It provides a direct and interactive environment to learn the method and study the code with minimal setup overhead. Furthermore, the package can be easily generalized for auxiliary-field quantum Monte Carlo (AFQMC) calculations in many other models of correlated electron systems, and can serve as a template for developing a production code for AFQMC total energy calculations in real materials. Several illustrative studies are carried out in one- and two-dimensional lattices on total energy, kinetic energy, potential energy, and charge and spin gaps.
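
    To give a flavour of the lattice setup such a package works with (our own illustrative code, not CPMC-Lab's API): the one-dimensional Hubbard hopping matrix with periodic boundaries, whose U = 0 ground-state energy is a standard sanity check for the full CPMC result:

      L = 8; thop = 1;                                 % sites and hopping amplitude
      H0 = -thop * (diag(ones(L-1,1), 1) + diag(ones(L-1,1), -1));
      H0(1,L) = -thop; H0(L,1) = -thop;                % periodic boundary condition
      E  = sort(eig(H0));                              % single-particle energies
      Nup = 3;                                         % electrons per spin species
      E0free = 2 * sum(E(1:Nup));                      % U = 0 ground-state energy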

  13. The Biopsychology-Toolbox: a free, open-source Matlab-toolbox for the control of behavioral experiments.

    PubMed

    Rose, Jonas; Otto, Tobias; Dittrich, Lars

    2008-10-30

    The Biopsychology-Toolbox is a free, open-source Matlab toolbox for the control of behavioral experiments. The major aim of the project was to provide a set of basic tools that allow programming novices to control the basic hardware used for behavioral experimentation without limiting the power and flexibility of the underlying programming language. The modular design of the toolbox allows porting of parts as well as entire paradigms between different types of hardware. In addition to the toolbox, this project offers a platform for the exchange of functions, hardware solutions and complete behavioral paradigms.

  14. Using STOQS and stoqstoolbox for in situ Measurement Data Access in Matlab

    NASA Astrophysics Data System (ADS)

    López-Castejón, F.; Schlining, B.; McCann, M. P.

    2012-12-01

    This poster presents the stoqstoolbox, an extension to Matlab that simplifies the loading of in situ measurement data directly from STOQS databases. STOQS (Spatial Temporal Oceanographic Query System) is a geospatial database tool designed to provide efficient access to data following the CF-NetCDF Discrete Samples Geometries convention. Data are loaded from CF-NetCDF files into a STOQS database where indexes are created on depth, spatial coordinates and other parameters, e.g. platform type. STOQS provides consistent, simple and efficient methods to query for data. For example, we can request all measurements with a standard_name of sea_water_temperature between two times and from between two depths. Data access is simpler because the data are retrieved by parameter irrespective of platform or mission file names. Access is more efficient because data are retrieved via the index on depth and only the requested data are retrieved from the database and transferred into the Matlab workspace. Applications in the stoqstoolbox query the STOQS database via an HTTP REST application programming interface; they follow the Data Access Object pattern, enabling highly customizable query construction. Data are loaded into Matlab structures that clearly indicate latitude, longitude, depth, measurement data value, and platform name. The stoqstoolbox is designed to be used in concert with other tools, such as nctoolbox, which can load data from any OPeNDAP data source. With these two toolboxes a user can easily work with in situ and other gridded data, such as from numerical models and remote sensing platforms. In order to show the capability of stoqstoolbox we will show an example of model validation using data collected during the May-June 2012 field experiment conducted by the Monterey Bay Aquarium Research Institute (MBARI) in Monterey Bay, California. The data are available from the STOQS server at http://odss.mbari.org/canon/stoqs_may2012/query/. Over 14 million data points of

  15. Ground Motion and Variability from 3-D Deterministic Broadband Simulations

    NASA Astrophysics Data System (ADS)

    Withers, Kyle Brett

    The accuracy of earthquake source descriptions is a major limitation in high-frequency (> 1 Hz) deterministic ground motion prediction, which is critical for performance-based design by building engineers. With the recent addition of realistic fault topography in 3D simulations of earthquake source models, ground motion can be deterministically calculated more realistically up to higher frequencies. We first introduce a technique to model frequency-dependent attenuation and compare its impact on strong ground motions recorded for the 2008 Chino Hills earthquake. Then, we model dynamic rupture propagation along rough faults up to 8 Hz, for both a generic strike-slip event and blind-thrust scenario earthquakes matching the fault geometry of the 1994 Mw 6.7 Northridge earthquake. We incorporate frequency-dependent attenuation via a power law above a reference frequency, of the form Q(f) = Q0 f^n, with high accuracy down to Q values of 15, and include nonlinear effects via Drucker-Prager plasticity. We model the region surrounding the fault with and without small-scale medium complexity, in both a 1D layered model characteristic of southern California rock and a 3D medium extracted from the SCEC CVM-S4.26i including a near-surface geotechnical layer. We find that the spectral accelerations from our models are within 1-2 interevent standard deviations of recent ground motion prediction equations (GMPEs) and compare well with recordings from strong ground motion stations at both short and long periods. At periods shorter than 1 second, Q(f) is needed to match the decay of spectral acceleration with distance from the fault seen in the GMPEs. We find that the similarity between the intraevent variability of our simulations and observations increases when small-scale heterogeneity and plasticity are included, which is extremely important as uncertainty in ground motion estimates dominates the overall uncertainty in seismic risk. In addition to GMPEs, we compare with simple

  16. Additivity Principle in High-Dimensional Deterministic Systems

    NASA Astrophysics Data System (ADS)

    Saito, Keiji; Dhar, Abhishek

    2011-12-01

    The additivity principle (AP), conjectured by Bodineau and Derrida [Phys. Rev. Lett. 92, 180601 (2004)], is discussed for the case of heat conduction in three-dimensional disordered harmonic lattices to consider the effects of deterministic dynamics, higher dimensionality, and different transport regimes, i.e., ballistic, diffusive, and anomalous transport. The cumulant generating function (CGF) for heat transfer is accurately calculated and compared with the one given by the AP. In the diffusive regime, we find a clear agreement with the conjecture even if the system is high dimensional. Surprisingly, even in the anomalous regime the CGF is also well fitted by the AP. Lower-dimensional systems are also studied and the importance of three dimensionality for the validity is stressed.

  17. Additivity principle in high-dimensional deterministic systems.

    PubMed

    Saito, Keiji; Dhar, Abhishek

    2011-12-16

    The additivity principle (AP), conjectured by Bodineau and Derrida [Phys. Rev. Lett. 92, 180601 (2004)], is discussed for the case of heat conduction in three-dimensional disordered harmonic lattices to consider the effects of deterministic dynamics, higher dimensionality, and different transport regimes, i.e., ballistic, diffusive, and anomalous transport. The cumulant generating function (CGF) for heat transfer is accurately calculated and compared with the one given by the AP. In the diffusive regime, we find a clear agreement with the conjecture even if the system is high dimensional. Surprisingly, even in the anomalous regime the CGF is also well fitted by the AP. Lower-dimensional systems are also studied and the importance of three dimensionality for the validity is stressed.

  18. Validation of a Deterministic Vibroacoustic Response Prediction Model

    NASA Technical Reports Server (NTRS)

    Caimi, Raoul E.; Margasahayam, Ravi

    1997-01-01

    This report documents the recently completed effort involving validation of a deterministic theory for the random vibration problem of predicting the response of launch pad structures in the low-frequency range (0 to 50 hertz). Use of the Statistical Energy Analysis (SEA) methods is not suitable in this range. Measurements of launch-induced acoustic loads and subsequent structural response were made on a cantilever beam structure placed in close proximity (200 feet) to the launch pad. Innovative ways of characterizing random, nonstationary, non-Gaussian acoustics are used for the development of a structure's excitation model. Extremely good correlation was obtained between analytically computed responses and those measured on the cantilever beam. Additional tests are recommended to bound the problem to account for variations in launch trajectory and inclination.

  19. Sensitivity analysis in a Lassa fever deterministic mathematical model

    NASA Astrophysics Data System (ADS)

    Abdullahi, Mohammed Baba; Doko, Umar Chado; Mamuda, Mamman

    2015-05-01

    Lassa virus, which causes Lassa fever, is on the list of potential bio-weapons agents. It was recently imported into Germany, the Netherlands, the United Kingdom and the United States as a consequence of the rapid growth of international traffic. A model with five mutually exclusive compartments related to Lassa fever is presented and the basic reproduction number analyzed. A sensitivity analysis of the deterministic model is performed in order to determine the relative importance of the model parameters to disease transmission. The result of the sensitivity analysis shows that the most sensitive parameter is human immigration, followed by the human recovery rate, and then person-to-person contact. This suggests that control strategies should target human immigration, effective drugs for treatment, and education to reduce person-to-person contact.
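
    The sensitivity measure behind such rankings is typically the normalized forward sensitivity index Y_p = (p/R0)*(dR0/dp). A hedged sketch with a centred finite difference and a purely hypothetical R0(p), since the paper's expression is not reproduced in the abstract:

      R0 = @(p) p(1)*p(3) / (p(2)*(p(2) + p(3)));      % hypothetical R0, parameters p = [beta; gamma; Lambda]
      p0 = [0.3; 0.1; 0.05];                           % illustrative baseline values
      S  = zeros(size(p0));
      for i = 1:numel(p0)
          h  = 1e-6 * p0(i);
          pp = p0; pp(i) = pp(i) + h;
          pm = p0; pm(i) = pm(i) - h;
          S(i) = (R0(pp) - R0(pm)) / (2*h) * p0(i) / R0(p0);  % normalized index for parameter i
      end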

  20. Turning Indium Oxide into a Superior Electrocatalyst: Deterministic Heteroatoms

    NASA Astrophysics Data System (ADS)

    Zhang, Bo; Zhang, Nan Nan; Chen, Jian Fu; Hou, Yu; Yang, Shuang; Guo, Jian Wei; Yang, Xiao Hua; Zhong, Ju Hua; Wang, Hai Feng; Hu, P.; Zhao, Hui Jun; Yang, Hua Gui

    2013-10-01

    Efficient electrocatalysts for many heterogeneous catalytic processes in energy conversion and storage systems must possess the necessary surface active sites. Here we identify, from X-ray photoelectron spectroscopy and density functional theory calculations, that controlling charge density redistribution via the atomic-scale incorporation of heteroatoms is paramount for importing surface active sites. We engineer deterministic nitrogen atoms inserted into the bulk material to preferentially expose active sites, turning an inactive material into an efficient electrocatalyst. The excellent electrocatalytic activity of N-In2O3 nanocrystals leads to higher performance of dye-sensitized solar cells (DSCs) than DSCs fabricated with Pt. This successful strategy provides a rational design for transforming abundant materials into highly efficient electrocatalysts. More importantly, the discovery that the transparent conductive oxide (TCO) commonly used in DSCs can itself serve as the counter electrode material means that, besides decreasing the cost, the device structure and processing techniques of DSCs can be simplified in the future.

  1. Derivation Of Probabilistic Damage Definitions From High Fidelity Deterministic Computations

    SciTech Connect

    Leininger, L D

    2004-10-26

    This paper summarizes a methodology used by the Underground Analysis and Planning System (UGAPS) at Lawrence Livermore National Laboratory (LLNL) for the derivation of probabilistic damage curves for US Strategic Command (USSTRATCOM). UGAPS uses high-fidelity finite element and discrete element codes on massively parallel supercomputers to predict damage to underground structures from military interdiction scenarios. These deterministic calculations can be riddled with uncertainty, especially when intelligence, the basis for this modeling, is uncertain. The technique presented here attempts to account for this uncertainty by bounding the problem with reasonable cases and using those bounding cases as a statistical sample. Probability-of-damage curves that account for uncertainty within the sample are computed, enabling the war planner to make informed decisions. This work is flexible enough to incorporate any desired damage mechanism and can utilize the variety of finite element and discrete element codes within the national laboratory and government contractor community.

  2. Deterministic processes vary during community assembly for ecologically dissimilar taxa

    PubMed Central

    Powell, Jeff R.; Karunaratne, Senani; Campbell, Colin D.; Yao, Huaiying; Robinson, Lucinda; Singh, Brajesh K.

    2015-01-01

    The continuum hypothesis states that both deterministic and stochastic processes contribute to the assembly of ecological communities. However, the contextual dependency of these processes remains an open question that imposes strong limitations on predictions of community responses to environmental change. Here we measure community and habitat turnover across multiple vertical soil horizons at 183 sites across Scotland for bacteria and fungi, both dominant and functionally vital components of all soils but which differ substantially in their growth habit and dispersal capability. We find that habitat turnover is the primary driver of bacterial community turnover in general, although its importance decreases with increasing isolation and disturbance. Fungal communities, however, exhibit a highly stochastic assembly process, both neutral and non-neutral in nature, largely independent of disturbance. These findings suggest that increased focus on dispersal limitation and biotic interactions are necessary to manage and conserve the key ecosystem services provided by these assemblages. PMID:26436640

  3. Location deterministic biosensing from quantum-dot-nanowire assemblies

    SciTech Connect

    Liu, Chao; Kim, Kwanoh; Fan, D. L.

    2014-08-25

    Semiconductor quantum dots (QDs), with high fluorescent brightness, stability, and tunable sizes, have received considerable interest for imaging, sensing, and delivery of biomolecules. In this research, we demonstrate location-deterministic biochemical detection from arrays of QD-nanowire hybrid assemblies. QDs with diameters less than 10 nm are manipulated and precisely positioned on the tips of the assembled gold (Au) nanowires. The manipulation mechanisms are quantitatively understood as the synergetic effects of dielectrophoresis (DEP) and alternating-current electroosmosis (ACEO) due to AC electric fields. The QD-nanowire hybrid sensors operate uniquely by concentrating bioanalytes onto the QDs on the tips of nanowires before detection, offering much enhanced efficiency and sensitivity in addition to predictable positioning. This research could result in advances in QD-based biomedical detection and inspires an innovative approach for fabricating various QD-based nanodevices.

  4. Integrating Clonal Selection and Deterministic Sampling for Efficient Associative Classification

    PubMed Central

    Elsayed, Samir A. Mohamed; Rajasekaran, Sanguthevar; Ammar, Reda A.

    2013-01-01

    Traditional Associative Classification (AC) algorithms typically search for all possible association rules to find a representative subset of those rules. Since the search space of such rules may grow exponentially as the support threshold decreases, the rule-discovery process can be computationally expensive. One effective way to tackle this problem is to directly find a set of high-stakes association rules that potentially builds a highly accurate classifier. This paper introduces AC-CS, an AC algorithm that integrates the clonal selection of the immune system along with deterministic data sampling. Upon picking a representative sample of the original data, it proceeds in an evolutionary fashion to populate only rules that are likely to yield good classification accuracy. Empirical results on several real datasets show that the approach generates dramatically fewer rules than traditional AC algorithms. In addition, the proposed approach is significantly more efficient than traditional AC algorithms while achieving competitive accuracy. PMID:24500504
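
    The clonal-selection half of the algorithm follows the usual immune-inspired cycle (select by affinity, clone proportionally, hypermutate, replace). The sketch below shows that cycle on a toy real-valued fitness; AC-CS's actual antibodies are association rules, which we do not reproduce here:

      fit = @(X) -sum((X - 0.3).^2, 2);         % toy affinity (higher is better)
      P = rand(20, 5);                          % candidate population
      for gen = 1:50
          [~, order] = sort(fit(P), 'descend');
          elite = P(order(1:5), :);             % highest-affinity antibodies
          nClones = [5 4 3 2 1];                % clones proportional to rank
          clones = zeros(sum(nClones), 5); row = 0;
          for i = 1:5
              Ci = repmat(elite(i,:), nClones(i), 1);
              Ci = Ci + 0.05*i*randn(size(Ci)); % hypermutation, stronger for lower rank
              clones(row+1:row+nClones(i), :) = Ci; row = row + nClones(i);
          end
          P = [elite; clones];                  % next generation replaces the rest
      end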

  5. Deterministic single-file dynamics in collisional representation.

    PubMed

    Marchesoni, F; Taloni, A

    2007-12-01

    We re-examine numerically the diffusion of a deterministic, or ballistic, single file with preassigned velocity distribution (Jepsen's gas) from a collisional viewpoint. For a two-modal velocity distribution, where half the particles have velocity +/-c, the collisional statistics is analytically proven to reproduce the continuous time representation. For a three-modal velocity distribution with equal fractions, where less than 1/2 of the particles have velocity +/-c, with the remaining particles at rest, the collisional process is shown to be inhomogeneous; its stationary properties are discussed here by combining exact and phenomenological arguments. Collisional memory effects are then related to the negative power-law tails in the velocity autocorrelation functions, predicted earlier in the continuous time formalism. Numerical and analytical results for Gaussian and four-modal Jepsen's gases are also reported for the sake of comparison.

  6. Fisher-Wright model with deterministic seed bank and selection.

    PubMed

    Koopmann, Bendix; Müller, Johannes; Tellier, Aurélien; Živković, Daniel

    2017-04-01

    Seed banks are common characteristics of many plant species, allowing storage of genetic diversity in the soil as dormant seeds for various periods of time. We investigate an above-ground population following a Fisher-Wright model with selection, coupled with a deterministic seed bank, assuming that the length of the seed bank is kept constant and the number of seeds is large. To assess the combined impact of seed banks and selection on genetic diversity, we derive a general diffusion model. The applied techniques outline a path for approximating a stochastic delay differential equation by an appropriately rescaled stochastic differential equation. We compute the equilibrium solution of the site-frequency spectrum and derive the times to fixation of an allele with and without selection. Finally, it is demonstrated that seed banks enhance the effect of selection on the site-frequency spectrum while slowing down the time until the mutation-selection equilibrium is reached.
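
    A minimal simulation sketch of this model class, under our own simplifications (equal germination weights across all bank ages, illustrative parameter values; the paper's diffusion analysis is not reproduced here):

      N = 1000; m = 5; s = 0.01; T = 500;       % population, bank length, selection, generations
      x = 0.1 * ones(m, 1);                     % frequency history seeding the bank
      for t = 1:T
          xbank = mean(x(end-m+1:end));         % allele frequency among germinating seeds
          xsel  = xbank*(1+s) / (xbank*(1+s) + 1 - xbank);  % selection acting on germination
          x(end+1, 1) = sum(rand(N,1) < xsel) / N;          % Wright-Fisher resampling
      end
      plot(x(m+1:end));                         % allele-frequency trajectory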

  7. Reinforcement learning output feedback NN control using deterministic learning technique.

    PubMed

    Xu, Bin; Yang, Chenguang; Shi, Zhongke

    2014-03-01

    In this brief, a novel adaptive-critic-based neural network (NN) controller is investigated for nonlinear pure-feedback systems. The controller design is based on the transformed predictor form, and the actor-critic NN control architecture includes two NNs: the critic NN approximates the strategic utility function, and the action NN minimizes both the strategic utility function and the tracking error. A deterministic learning technique is employed to guarantee that the partial persistent-excitation condition of the internal states is satisfied during tracking control to a periodic reference orbit. The uniform ultimate boundedness of closed-loop signals is shown via Lyapunov stability analysis. Simulation results are presented to demonstrate the effectiveness of the proposed control.

  8. Deterministic secure communications using two-mode squeezed states

    SciTech Connect

    Marino, Alberto M.; Stroud, C. R. Jr.

    2006-08-15

    We propose a scheme for quantum cryptography that uses the squeezing phase of a two-mode squeezed state to transmit information securely between two parties. The basic principle behind this scheme is the fact that each mode of the squeezed field by itself does not contain any information regarding the squeezing phase. The squeezing phase can only be obtained through a joint measurement of the two modes. This, combined with the fact that it is possible to perform remote squeezing measurements, makes it possible to implement a secure quantum communication scheme in which a deterministic signal can be transmitted directly between two parties while the encryption is done automatically by the quantum correlations present in the two-mode squeezed state.

  9. Deterministic entanglement generation from driving through quantum phase transitions

    NASA Astrophysics Data System (ADS)

    Luo, Xin-Yu; Zou, Yi-Quan; Wu, Ling-Na; Liu, Qi; Han, Ming-Fei; Tey, Meng Khoon; You, Li

    2017-02-01

    Many-body entanglement is often created through the system evolution, aided by nonlinear interactions between the constituting particles. These very dynamics, however, can also lead to fluctuations and degradation of the entanglement if the interactions cannot be controlled. Here, we demonstrate near-deterministic generation of an entangled twin-Fock condensate of ~11,000 atoms by driving a rubidium-87 Bose-Einstein condensate undergoing spin mixing through two consecutive quantum phase transitions (QPTs). We directly observe number squeezing of 10.7 ± 0.6 decibels and normalized collective spin length of 0.99 ± 0.01. Together, these observations allow us to infer an entanglement-enhanced phase sensitivity of ~6 decibels beyond the standard quantum limit and an entanglement breadth of ~910 atoms. Our work highlights the power of generating large-scale useful entanglement by taking advantage of the different entanglement landscapes separated by QPTs.

  10. More on exact state reconstruction in deterministic digital control systems

    NASA Technical Reports Server (NTRS)

    Polites, Michael E.

    1988-01-01

    Presented is a special form of the Ideal State Reconstructor for deterministic digital control systems which is simpler to implement than the most general form. The Ideal State Reconstructor is so named because, if the plant parameters are known exactly, its output will exactly equal, not just approximate, the true state of the plant and accomplish this without any knowledge of the plant's initial state. Besides this, it adds no new states or eigenvalues to the system. Nor does it affect the plant equation for the system in any way; it affects the measurement equation only. It is characterized by the fact that discrete measurements are generated every T/N seconds and input into a multi-input/multi-output moving-average (MA) process. The output of this process is sampled every T seconds and utilized in reconstructing the state of the system.

  11. Safe microburst penetration techniques: A deterministic, nonlinear, optimal control approach

    NASA Technical Reports Server (NTRS)

    Psiaki, Mark L.

    1987-01-01

    A relatively large amount of computer time was used for the calculation of an optimal trajectory, but this is subject to reduction with moderate effort. The Deterministic, Nonlinear, Optimal Control algorithm yielded excellent aircraft performance in trajectory tracking for the given microburst. It did so by varying the angle of attack to counteract the lift effects of microburst-induced airspeed variations. Throttle saturation and aerodynamic stall limits were not a problem for the case considered, proving that the aircraft's performance capabilities were not violated by the given wind field. All closed-loop control laws previously considered performed very poorly in comparison, and therefore do not come near to taking full advantage of aircraft performance.

  12. Deterministic generation of a cluster state of entangled photons

    NASA Astrophysics Data System (ADS)

    Schwartz, I.; Cogan, D.; Schmidgall, E. R.; Don, Y.; Gantz, L.; Kenneth, O.; Lindner, N. H.; Gershoni, D.

    2016-10-01

    Photonic cluster states are a resource for quantum computation based solely on single-photon measurements. We use semiconductor quantum dots to deterministically generate long strings of polarization-entangled photons in a cluster state by periodic timed excitation of a precessing matter qubit. In each period, an entangled photon is added to the cluster state formed by the matter qubit and the previously emitted photons. In our prototype device, the qubit is the confined dark exciton, and it produces strings of hundreds of photons in which the entanglement persists over five sequential photons. The measured process map characterizing the device has a fidelity of 0.81 with that of an ideal device. Further feasible improvements of this device may reduce the resources needed for optical quantum information processing.

  13. Capillary-mediated interface perturbations: Deterministic pattern formation

    NASA Astrophysics Data System (ADS)

    Glicksman, Martin E.

    2016-09-01

    Leibniz-Reynolds analysis identifies a 4th-order capillary-mediated energy field that is responsible for shape changes observed during melting, and for interface speed perturbations during crystal growth. Field-theoretic principles also show that capillary-mediated energy distributions cancel over large length scales, but modulate the interface shape on smaller mesoscopic scales. Speed perturbations reverse direction at specific locations where they initiate inflection and branching on unstable interfaces, thereby enhancing pattern complexity. Simulations of pattern formation by several independent groups of investigators using a variety of numerical techniques confirm that shape changes during both melting and growth initiate at locations predicted from interface field theory. Finally, limit cycles occur as an interface and its capillary energy field co-evolve, leading to synchronized branching. Synchronous perturbations produce classical dendritic structures, whereas asynchronous perturbations observed in isotropic and weakly anisotropic systems lead to chaotic-looking patterns that remain nevertheless deterministic.

  14. A Deterministic Computational Procedure for Space Environment Electron Transport

    NASA Technical Reports Server (NTRS)

    Nealy, John E.; Chang, C. K.; Norman, Ryan B.; Blattnig, Steve R.; Badavi, Francis F.; Adamcyk, Anne M.

    2010-01-01

    A deterministic computational procedure for describing the transport of electrons in condensed media is formulated to simulate the effects and exposures from spectral distributions typical of electrons trapped in planetary magnetic fields. The primary purpose for developing the procedure is to provide a means of rapidly performing numerous repetitive transport calculations essential for electron radiation exposure assessments for complex space structures. The present code utilizes well-established theoretical representations to describe the relevant interactions and transport processes. A combined mean free path and average trajectory approach is used in the transport formalism. For typical space environment spectra, several favorable comparisons with Monte Carlo calculations are made which have indicated that accuracy is not compromised at the expense of the computational speed.

  15. Deterministic Production of Photon Number States via Quantum Feedback Control

    NASA Astrophysics Data System (ADS)

    Geremia, J. M.

    2006-05-01

    It is well-known that measurements reduce the state of a quantum system, at least approximately, to an eigenstate of the operator associated with the physical property being measured. Here, we employ a continuous measurement of cavity photon number to achieve a robust, nondestructively verifiable procedure for preparing number states of an optical cavity mode. Such Fock states are highly sought after for the enabling role they play in quantum computing, networking and precision metrology. Furthermore, we demonstrate that the particular Fock state produced in each application of the continuous photon number measurement can be controlled using techniques from real-time quantum feedback control. The result of the feedback- stabilized measurement is a deterministic source of (nearly ideal) cavity Fock states. An analysis of feedback stability and the experimental viability of a quantum optical implementation currently underway at the University of New Mexico will be presented.

  16. Working Memory and Its Relation to Deterministic Sequence Learning

    PubMed Central

    Martini, Markus; Furtner, Marco R.; Sachse, Pierre

    2013-01-01

    Is there a relation between working memory (WM) and incidental sequence learning? Nearly all of the earlier investigations in the role of WM capacity (WMC) in sequence learning suggest no correlations in incidental learning conditions. However, the theoretical view of WM and operationalization of WMC made strong progress in recent years. The current study related performance in a coordination and transformation task to sequence knowledge in a four-choice incidental deterministic serial reaction time (SRT) task and a subsequent free generation task. The response-to-stimulus interval (RSI) was varied between 0 ms and 300 ms. Our results show correlations between WMC and error rates in condition RSI 0 ms. For condition RSI 300 ms we found relations between WMC and sequence knowledge in the SRT task as well as between WMC and generation task performance. Theoretical implications of these findings for ongoing processes during sequence learning and retrieval of sequence knowledge are discussed. PMID:23409148

  17. Turning indium oxide into a superior electrocatalyst: deterministic heteroatoms.

    PubMed

    Zhang, Bo; Zhang, Nan Nan; Chen, Jian Fu; Hou, Yu; Yang, Shuang; Guo, Jian Wei; Yang, Xiao Hua; Zhong, Ju Hua; Wang, Hai Feng; Hu, P; Zhao, Hui Jun; Yang, Hua Gui

    2013-10-31

    Efficient electrocatalysts for many heterogeneous catalytic processes in energy conversion and storage systems must possess the necessary surface active sites. Here we identify, from X-ray photoelectron spectroscopy and density functional theory calculations, that controlling charge density redistribution via the atomic-scale incorporation of heteroatoms is paramount for importing surface active sites. We engineer deterministic nitrogen atoms inserted into the bulk material to preferentially expose active sites, turning an inactive material into an efficient electrocatalyst. The excellent electrocatalytic activity of N-In2O3 nanocrystals leads to higher performance of dye-sensitized solar cells (DSCs) than DSCs fabricated with Pt. This successful strategy provides a rational design for transforming abundant materials into highly efficient electrocatalysts. More importantly, the discovery that the transparent conductive oxide (TCO) commonly used in DSCs can itself serve as the counter electrode material means that, besides decreasing the cost, the device structure and processing techniques of DSCs can be simplified in the future.

  18. Conservative deterministic spectral Boltzmann solver near the grazing collisions limit

    NASA Astrophysics Data System (ADS)

    Haack, Jeffrey R.; Gamba, Irene M.

    2012-11-01

    We present new results building on the conservative deterministic spectral method for the space homogeneous Boltzmann equation developed by Gamba and Tharkabhushaman. This approach is a two-step process that acts on the weak form of the Boltzmann equation, and uses the machinery of the Fourier transform to reformulate the collisional integral into a weighted convolution in Fourier space. A constrained optimization problem is solved to preserve the mass, momentum, and energy of the resulting distribution. Within this framework we have extended the formulation to the more general case of collision operators with anisotropic scattering mechanisms, which requires a new formulation of the convolution weights. We also derive the grazing collisions limit for the method, and show that it is consistent with the Fokker-Planck-Landau equations as the grazing collisions parameter goes to zero.
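
    As an illustration of the conservation step described above, the following minimal MATLAB sketch projects a discretized distribution onto the closest distribution (in the least-squares sense) whose discrete moments match prescribed mass, momentum and energy values. This is the generic Lagrange-multiplier construction, not code from the Gamba-Tharkabhushaman solver; the grid, weights and target moments are hypothetical.

      % Least-squares projection of a discrete distribution f0 onto the
      % constraint set C*f = b (mass, momentum, energy conservation).
      v  = linspace(-5, 5, 64)';           % hypothetical velocity grid
      w  = (v(2) - v(1)) * ones(size(v));  % uniform quadrature weights
      f0 = exp(-(v - 0.3).^2);             % uncorrected post-collision distribution
      C  = [w'; (w .* v)'; (w .* v.^2)'];  % moment operator: [mass; momentum; energy]
      b  = [1; 0; 0.5];                    % prescribed conserved moments
      % minimize ||f - f0||^2 subject to C*f = b  =>  f = f0 + C'*lambda
      lambda = (C * C') \ (b - C * f0);
      f = f0 + C' * lambda;
      disp(norm(C * f - b))                % constraints satisfied to machine precision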

  19. Deterministic and stochastic modeling of aquifer stratigraphy, South Carolina

    SciTech Connect

    Miller, R.B.; Castle, J.W.; Temples, T.J.

    2000-04-01

    Deterministic and stochastic methods of three-dimensional hydrogeologic modeling are applied to characterization of contaminated Eocene aquifers at the Savannah River Site, South Carolina. The results address several important issues, including the use of multiple types of data in creating high-resolution aquifer models and the application of sequence-stratigraphic constraints. Specific procedures used include defining grid architecture stratigraphically, upscaling, modeling lithologic properties, and creating multiple equiprobable realizations of aquifer stratigraphy. An important question answered by the study is how to incorporate gamma-ray borehole-geophysical data in areas of anomalous log response, which occurs commonly in aquifers and confining units of the Atlantic Coastal Plain and other areas. To overcome this problem, gamma-ray models were conditioned to grain-size and lithofacies realizations. The investigation contributes to identifying potential pathways for downward migration of contaminants, which have been detected in confined aquifers at the modeling site. The approach followed in this investigation produces quantitative, stratigraphically constrained, geocellular models that incorporate multiple types of data from borehole-geophysical logs and continuous cores. The use of core-based stochastic realizations in conditioning deterministic models provides the advantage of incorporating lithologic information based on direct observations of cores rather than using only indirect measurements from geophysical logs. The high resolution of the models is demonstrated by the representation of thin, discontinuous clay beds that act as local barriers to flow. The models are effective in depicting the contrasts in geometry and heterogeneity between sheet-like nearshore-transgressive sands and laterally discontinuous sands of complex shoreline environments.

  20. Application of Stochastic and Deterministic Approaches to Modeling Interstellar Chemistry

    NASA Astrophysics Data System (ADS)

    Pei, Yezhe

    This work is about simulations of interstellar chemistry using the deterministic rate equation (RE) method and the stochastic moment equation (ME) method. Our interest is in the primordial, metal-poor interstellar medium (ISM), in which the so-called "Population II" stars could have formed during the "Epoch of Reionization" of the early universe. We build a gas-phase model using the RE scheme to describe ionization-powered interstellar chemistry. We demonstrate that OH replaces CO as the most abundant metal-bearing molecule in such interstellar clouds of the early universe. Grain-surface reactions play an important role in astrochemistry, but the lack of an accurate yet efficient simulation method still presents a challenge, especially for large, practical gas-grain systems. We develop a hybrid scheme of moment equations and rate equations (HMR) for large gas-grain networks to model astrochemical reactions in interstellar clouds. Specifically, we have used a large chemical gas-grain model, with stochastic moment equations treating the surface chemistry and deterministic rate equations treating the gas-phase chemistry, to simulate astrochemical systems such as the ISM in the Milky Way, the Large Magellanic Cloud (LMC) and the Small Magellanic Cloud (SMC). We compare the results to those of pure rate equations and modified rate equations, and discuss how moment equations improve our theoretical modeling and how the abundances of the assorted species change with varied metallicity. We also model the observed composition of H2O, CO and CO2 ices toward Young Stellar Objects in the LMC and show that the HMR method gives a better match to the observations than the pure RE method.
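
    The deterministic rate-equation treatment mentioned above amounts to integrating a stiff system of kinetic ODEs. A minimal sketch with a hypothetical two-reaction network (A + B -> AB with coefficient k1, AB -> A + B with coefficient k2; the values are illustrative, not from this work), using MATLAB's stiff solver ode15s:

      % Toy gas-phase network: A + B -> AB (k1), AB -> A + B (k2).
      k1 = 1e-9;  k2 = 1e-3;               % hypothetical rate coefficients
      rhs = @(t, n) [ -k1*n(1)*n(2) + k2*n(3);
                      -k1*n(1)*n(2) + k2*n(3);
                       k1*n(1)*n(2) - k2*n(3) ];
      n0 = [1e4; 1e3; 0];                  % initial abundances (cm^-3)
      [t, n] = ode15s(rhs, [0 1e7], n0);   % stiff integrator handles disparate rates
      loglog(t, n); legend('A', 'B', 'AB'); xlabel('t (s)'); ylabel('n (cm^{-3})')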

  1. GPELab, a Matlab toolbox to solve Gross-Pitaevskii equations I: Computation of stationary solutions

    NASA Astrophysics Data System (ADS)

    Antoine, Xavier; Duboscq, Romain

    2014-11-01

    This paper presents GPELab (Gross-Pitaevskii Equation Laboratory), an advanced easy-to-use and flexible Matlab toolbox for numerically simulating many complex physics situations related to Bose-Einstein condensation. The model equation that GPELab solves is the Gross-Pitaevskii equation. The aim of this first part is to present the physical problems and the robust and accurate numerical schemes that are implemented for computing stationary solutions, to show a few computational examples and to explain how the basic GPELab functions work. Problems that can be solved include: 1d, 2d and 3d situations, general potentials, large classes of local and nonlocal nonlinearities, multi-component problems, and fast rotating gases. The toolbox is developed in such a way that other physics applications that require the numerical solution of general Schrödinger-type equations can be considered. Catalogue identifier: AETU_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AETU_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 26 552 No. of bytes in distributed program, including test data, etc.: 611 289 Distribution format: tar.gz Programming language: Matlab. Computer: PC, Mac. Operating system: Windows, Mac OS, Linux. Has the code been vectorized or parallelized?: Yes RAM: 4000 Megabytes Classification: 2.7, 4.6, 7.7. Nature of problem: Computing stationary solutions for a class of systems (multi-component) of Gross-Pitaevskii equations in 1d, 2d and 3d. This program is particularly well designed for the computation of ground states of Bose-Einstein condensates as well as dynamics. Solution method: We use the imaginary-time method with a Semi-Implicit Backward Euler scheme, a pseudo-spectral approximation and a Krylov subspace method. Running time: From a few minutes
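
    To illustrate the class of schemes GPELab implements, here is a self-contained 1d sketch of the imaginary-time method with a semi-implicit Backward Euler step and a pseudo-spectral Laplacian. Grid size, time step, trap and nonlinearity strength are hypothetical, and the actual toolbox is far more general and robust than this toy.

      % Ground state of the 1d GPE by imaginary-time (gradient-flow) iteration.
      N = 256; L = 16; dx = L/N; x = (-N/2:N/2-1)' * dx;
      k = (2*pi/L) * [0:N/2-1, -N/2:-1]';  % wavenumbers in fft ordering
      V = 0.5 * x.^2;                      % harmonic trap
      g = 10; dt = 1e-3;                   % nonlinearity, imaginary time step
      psi = exp(-x.^2);
      psi = psi / sqrt(sum(abs(psi).^2) * dx);
      for it = 1:5000
          % kinetic term implicit (diagonal in Fourier space), rest explicit
          rhs = psi - dt * (V + g*abs(psi).^2) .* psi;
          psi = ifft( fft(rhs) ./ (1 + 0.5 * dt * k.^2) );
          psi = psi / sqrt(sum(abs(psi).^2) * dx);   % renormalize every step
      end
      plot(x, abs(psi).^2); xlabel('x'); ylabel('|\psi|^2')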

  2. Spectrum image analysis tool - A flexible MATLAB solution to analyze EEL and CL spectrum images.

    PubMed

    Schmidt, Franz-Philipp; Hofer, Ferdinand; Krenn, Joachim R

    2017-02-01

    Spectrum imaging techniques, which simultaneously acquire structural (image) and spectroscopic data, require appropriate and careful processing to extract the information contained in the dataset. In this article we introduce a MATLAB-based software package that uses three-dimensional data (EEL/CL spectrum images in dm3 format (Gatan Inc.'s DigitalMicrograph(®))) as input. A graphical user interface enables fast and easy mapping of spectrally dependent images and position-dependent spectra. First, data processing such as background subtraction, deconvolution and denoising; second, multiple display options, including an EEL/CL movie maker; and third, applicability to a large number of data sets with a small workload make this program an interesting tool to visualize otherwise hidden details.

  3. Simulation for Wind Turbine Generators -- With FAST and MATLAB-Simulink Modules

    SciTech Connect

    Singh, M.; Muljadi, E.; Jonkman, J.; Gevorgian, V.; Girsang, I.; Dhupia, J.

    2014-04-01

    This report presents the work done to develop generator and gearbox models in the Matrix Laboratory (MATLAB) environment and couple them to the National Renewable Energy Laboratory's Fatigue, Aerodynamics, Structures, and Turbulence (FAST) program. The goal of this project was to interface the superior aerodynamic and mechanical models of FAST to the excellent electrical generator models found in various Simulink libraries and applications. The scope was limited to Type 1, Type 2, and Type 3 generators and fairly basic gear-train models. Future work will include models of Type 4 generators and more-advanced gear-train models with increased degrees of freedom. As described in this study, implementation of the developed drivetrain model enables the software tool to be used in many ways. Several case studies are presented as examples of the many types of studies that can be performed using this tool.

  4. Arduino-Based Data Acquisition into Excel, LabVIEW, and MATLAB

    NASA Astrophysics Data System (ADS)

    Nichols, Daniel

    2017-04-01

    Data acquisition equipment for physics can be quite expensive. As an alternative, data can be acquired using a low-cost Arduino microcontroller. The Arduino has been used in physics labs where the data are acquired using the Arduino software. The Arduino software, however, does not contain a suite of tools for data fitting and analysis. The data are typically gathered first and then imported manually into an analysis program. There is a way, however, that allows data gathered by the Arduino to be imported in real time into a data analysis package. Illustrated in this article are add-ins for Excel, MATLAB, and LabVIEW that import data directly from the Arduino and allow for real-time plotting and analysis.
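
    A minimal sketch of the real-time import described here, assuming the Arduino prints one numeric reading per line over its USB serial link. The port name and baud rate are machine-specific assumptions, and serialport requires MATLAB R2019b or later (earlier releases use the older serial interface):

      % Stream Arduino lines ("123.4\n") into a live MATLAB plot.
      s = serialport("COM3", 9600);        % port name is an assumption
      configureTerminator(s, "LF");
      h = animatedline; xlabel('sample'); ylabel('value');
      for n = 1:500
          y = str2double(readline(s));     % blocking read of one text line
          if ~isnan(y)
              addpoints(h, n, y); drawnow limitrate
          end
      end
      clear s                              % release the serial port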

  5. GRace: a MATLAB-based application for fitting the discrimination-association model.

    PubMed

    Stefanutti, Luca; Vianello, Michelangelo; Anselmi, Pasquale; Robusto, Egidio

    2014-10-28

    The Implicit Association Test (IAT) is a computerized two-choice discrimination task in which stimuli have to be categorized as belonging to target categories or attribute categories by pressing, as quickly and accurately as possible, one of two response keys. The discrimination association model has been recently proposed for the analysis of reaction time and accuracy of an individual respondent to the IAT. The model disentangles the influences of three qualitatively different components on the responses to the IAT: stimuli discrimination, automatic association, and termination criterion. The article presents General Race (GRace), a MATLAB-based application for fitting the discrimination association model to IAT data. GRace has been developed for Windows as a standalone application. It is user-friendly and does not require any programming experience. The use of GRace is illustrated on the data of a Coca Cola-Pepsi Cola IAT, and the results of the analysis are interpreted and discussed.

  6. United States Air Force Summer Research Program -- 1993 Summer Research Program Final Reports. Volume 10. Wright Laboratory

    DTIC Science & Technology

    1993-01-01

    matlab program for quick and easy spectral analysis of any particular data set. 1-3 Mathematical Theory: The MUSIC and Minimum-Norm spectral estimation ... The estimation of the eigenvalues was done with the eig function provided with the Matlab simulation environment [4]. Essentially, the relationship ... when the variable frequency ω0 equals the frequency of the sources of the input sequence, the pseudospectrum will have a peak. The Matlab simulation
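
    The snippet above is fragmentary, but the method it names is standard. A minimal MATLAB sketch of MUSIC, reconstructed from the textbook algorithm rather than from the report's own code: estimate an autocorrelation matrix from a noisy sinusoid, eigendecompose it with eig, and scan the noise-subspace pseudospectrum for peaks.

      % MUSIC pseudospectrum for one real sinusoid in white noise.
      fs = 1000; f0 = 150; Ns = 512; M = 20; p = 2;   % p = 2: one real sinusoid
      x = cos(2*pi*f0*(0:Ns-1)'/fs) + 0.5*randn(Ns, 1);
      X = hankel(x(1:M), x(M:Ns));         % M-by-(Ns-M+1) snapshot matrix
      R = (X * X') / size(X, 2);           % autocorrelation estimate
      [V, D] = eig(R);
      [~, idx] = sort(diag(D), 'descend');
      En = V(:, idx(p+1:end));             % noise subspace
      f = 0:0.5:fs/2;  P = zeros(size(f));
      for i = 1:numel(f)
          a = exp(1j*2*pi*f(i)/fs*(0:M-1)');    % steering vector
          P(i) = 1 / real(a' * (En*En') * a);   % peaks at source frequencies
      end
      plot(f, 10*log10(P/max(P))); xlabel('Hz'); ylabel('dB')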

  7. Dynamic model of the vergence eye movement system: simulations using MATLAB/SIMULINK.

    PubMed

    Hung, G K

    1998-01-01

    A dynamic model of the vergence eye movement system was developed and simulated using MATLAB/SIMULINK. The model was based on a dual-mode dynamic model previously written in FORTRAN. It consisted of a fast open-loop component and a slow closed-loop component. The new model contained several important modifications. For example, in the fast component, a zero-order hold element replaced the sampler and the target trajectory estimator in the earlier model to provide more stable and accurate responses. Also, a periodicity detector was added to automatically detect periodicity in the stimulus waveform. The stored periodic stimulus, with a reduction in latency, was used to drive the fast component output. Moreover, a connection representing the efference copy signal was added from the fast component output to the disparity input to provide an accurate estimate of the stimulus waveform. Further, Robinson's model of the extraocular muscles replaced the earlier 2nd-order plant to provide more realistic muscle dynamics. The entire model, containing the fast and slow components, was simulated using a variety of stimuli such as pulses, positive and negative ramps, square-wave, and sine-wave. The responses showed dynamic characteristics similar to experimental results. Thus, this new MATLAB/SIMULINK program provides a relatively easy-to-use, versatile, and powerful simulation environment for investigating the basic as well as clinical aspects of vergence dynamics. Moreover, the simulation program has general characteristics that can be modified to represent other oculomotor systems such as the versional and accommodation systems. This provides a framework for future investigation of dynamic interactions between oculomotor systems.

  8. From scale invariance to deterministic chaos in DNA sequences: towards a deterministic description of gene organization in the human genome

    NASA Astrophysics Data System (ADS)

    Nicolay, S.; Brodie of Brodie, E. B.; Touchon, M.; d'Aubenton-Carafa, Y.; Thermes, C.; Arneodo, A.

    2004-10-01

    We use the continuous wavelet transform to perform a space-scale analysis of the AT and GC skews (strand asymmetries) in human genomic sequences, which have been shown to correlate with gene transcription. This study reveals the existence of a characteristic scale ℓc ≃ 25 ± 10 kb that separates a monofractal long-range correlated noisy regime at small scales (ℓ < ℓc) from relaxational oscillatory behavior at large scales (ℓ > ℓc). We show that these large-scale nonlinear oscillations reveal an organization of the human genome into adjacent domains (≈400 kb) with preferential gene orientation. Using classical techniques from dynamical systems theory, we demonstrate that these relaxational oscillations display all the characteristic properties of the chaotic strange attractor behavior observed near homoclinic orbits of Shil'nikov type. We discuss the possibility that replication and gene regulation processes are governed by a low-dimensional dynamical system that displays deterministic chaos.

  9. Deterministic or Probabilistic - Robustness or Resilience: How to Respond to Climate Change?

    NASA Astrophysics Data System (ADS)

    Plag, H.; Earnest, D.; Jules-Plag, S.

    2013-12-01

    suggests an intriguing hypothesis: disaster risk reduction programs need to account for whether they also facilitate the public trust, cooperation, and communication needed to recover from a disaster. Our work in the Hampton Roads area, where the probability of hazardous flooding and inundation events exceeding the thresholds of the infrastructure is high, suggests that to facilitate the paradigm shift from the deterministic to a probabilistic approach, natural sciences have to focus on hazard probabilities, while engineering and social sciences have to work together to understand how interactions of the built and social environments impact robustness and resilience. The current science-policy relationship needs to be augmented by social structures that can learn from previous unexpected events. In this response to climate change, science does not have the primary goal to reduce uncertainties and prediction errors, but rather to develop processes that can utilize uncertainties and surprises to increase robustness, strengthen resilience, and reduce fragility of the social systems during times when infrastructure fails.

  10. Deterministic and Stochastic Analysis of a Prey-Dependent Predator-Prey System

    ERIC Educational Resources Information Center

    Maiti, Alakes; Samanta, G. P.

    2005-01-01

    This paper reports on studies of the deterministic and stochastic behaviours of a predator-prey system with prey-dependent response function. The first part of the paper deals with the deterministic analysis of uniform boundedness, permanence, stability and bifurcation. In the second part the reproductive and mortality factors of the prey and…
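
    As a hedged illustration of the deterministic side of such an analysis, the following sketch integrates a generic predator-prey model with a prey-dependent (Holling type II) response; the parameter values are hypothetical and not taken from the paper.

      % Prey u(1), predator u(2); response f(x) = a*x/(1 + a*h*x).
      r = 1.0; K = 10; a = 1.0; h = 0.5; e = 0.6; m = 0.4;  % hypothetical
      f = @(x) a*x ./ (1 + a*h*x);
      rhs = @(t, u) [ r*u(1)*(1 - u(1)/K) - f(u(1))*u(2);
                      e*f(u(1))*u(2) - m*u(2) ];
      [t, u] = ode45(rhs, [0 200], [5; 2]);
      plot(u(:,1), u(:,2)); xlabel('prey'); ylabel('predator')   % phase portrait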

  11. SBEToolbox: A Matlab Toolbox for Biological Network Analysis.

    PubMed

    Konganti, Kranti; Wang, Gang; Yang, Ence; Cai, James J

    2013-01-01

    We present SBEToolbox (Systems Biology and Evolution Toolbox), an open-source Matlab toolbox for biological network analysis. It takes a network file as input, calculates a variety of centralities and topological metrics, clusters nodes into modules, and displays the network using different graph layout algorithms. Straightforward implementation and the inclusion of high-level functions allow the functionality to be easily extended or tailored through developing custom plugins. SBEGUI, a menu-driven graphical user interface (GUI) of SBEToolbox, enables easy access to various network and graph algorithms for programmers and non-programmers alike. All source code and sample data are freely available at https://github.com/biocoder/SBEToolbox/releases.

  12. Parallel distance matrix computation for Matlab data mining

    NASA Astrophysics Data System (ADS)

    Skurowski, Przemysław; Staniszewski, Michał

    2016-06-01

    The paper presents utility functions for computing a distance matrix, which plays a crucial role in data mining. The design goal was to enable operation on relatively large datasets by overcoming the basic shortcoming, computing time, with an easy-to-use interface. The presented solution is a set of functions created with emphasis on practical applicability in real life. The proposed solution is presented along with the theoretical background for the performance scaling. Furthermore, different approaches to parallel computing are analyzed, including shared memory, which is uncommon in the Matlab environment.
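
    A minimal sketch of the kind of row-wise parallel distance-matrix computation the paper discusses (not the authors' actual functions). parfor needs the Parallel Computing Toolbox and degrades to a serial loop when no parallel pool is available; the implicit expansion used below requires R2016b or later (use bsxfun on older releases).

      % Pairwise Euclidean distance matrix, one row per parfor iteration.
      n = 2000; d = 16;
      X = rand(n, d);
      D = zeros(n, n);
      parfor i = 1:n
          D(i, :) = sqrt(sum((X - X(i, :)).^2, 2))';   % distances from point i
      end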

  13. Causes of maternal mortality decline in Matlab, Bangladesh.

    PubMed

    Chowdhury, Mahbub Elahi; Ahmed, Anisuddin; Kalim, Nahid; Koblinsky, Marge

    2009-04-01

    Bangladesh is distinct among developing countries in achieving a low maternal mortality ratio (MMR) of 322 per 100,000 livebirths despite the very low use of skilled care at delivery (13% nationally). This variation has also been observed in Matlab, a rural area in Bangladesh, where longitudinal data on maternal mortality are available since the mid-1970s. The current study investigated the possible causes of the maternal mortality decline in Matlab. The study analyzed 769 maternal deaths and 215,779 pregnancy records from the Health and Demographic Surveillance System (HDSS) and other sources of safe motherhood data in the ICDDR,B and government service areas in Matlab during 1976-2005. The major interventions that took place in both the areas since the early 1980s were the family-planning programme plus safe menstrual regulation services and safe motherhood interventions (midwives for normal delivery in the ICDDR,B service area from the late 1980s and equal access to comprehensive emergency obstetric care [EmOC] in public facilities for women from both the areas). National programmes for social development and empowerment of women through education and microcredit programmes were implemented in both the areas. The quantitative findings were supplemented by a qualitative study by interviewing local community care providers for their change in practices for maternal healthcare over time. After the introduction of the safe motherhood programme, reduction in maternal mortality was higher in the ICDDR,B service area (68.6%) than in the government service area (50.4%) during 1986-1989 and 2001-2005. Reduction in the number of maternal deaths due to the fertility decline was higher in the government service area (30%) than in the ICDDR,B service area (23%) during 1979-2005. In each area, there has been substantial reduction in abortion-related mortality--86.7% and 78.3%--in the ICDDR,B and government service areas respectively. Education of women was a strong predictor

  14. Multi-Strain Deterministic Chaos in Dengue Epidemiology, A Challenge for Computational Mathematics

    NASA Astrophysics Data System (ADS)

    Aguiar, Maíra; Kooi, Bob W.; Stollenwerk, Nico

    2009-09-01

    Recently, we have analysed epidemiological models of competing strains of pathogens and hence differences in transmission for first versus secondary infection due to interaction of the strains with previously acquired immunities, as has been described for dengue fever, known as antibody-dependent enhancement (ADE). These models show a rich variety of dynamics through bifurcations up to deterministic chaos. Including temporary cross-immunity even enlarges the parameter range of such chaotic attractors, and also gives rise to various coexisting attractors, which are difficult to identify by standard numerical bifurcation programs using continuation methods. A combination of techniques, including classical bifurcation plots and Lyapunov exponent spectra, has to be applied to gain further insight into such dynamical structures. In particular, Lyapunov spectra, which quantify the predictability horizon in the epidemiological system, are computationally very demanding. We show ways to speed up computations of such Lyapunov spectra by a factor of more than ten by parallelizing previously used sequential C programs. Such fast computations of Lyapunov spectra will be especially useful in future investigations of seasonally forced versions of the present models, as they are needed for data analysis.

  15. ELRIS2D: A MATLAB Package for the 2D Inversion of DC Resistivity/IP Data

    NASA Astrophysics Data System (ADS)

    Akca, Irfan

    2016-04-01

    ELRIS2D is an open source code written in MATLAB for the two-dimensional inversion of direct current resistivity (DCR) and time domain induced polarization (IP) data. The user interface of the program is designed for functionality and ease of use. All available settings of the program can be reached from the main window. The subsurface is discretized using a hybrid mesh generated by the combination of structured and unstructured meshes, which reduces the computational cost of the whole inversion procedure. The inversion routine is based on the smoothness constrained least squares method. In order to verify the program, responses of two test models and field data sets were inverted. The models inverted from the synthetic data sets are consistent with the original test models in both DC resistivity and IP cases. A field data set acquired in an archaeological site is also used for the verification of outcomes of the program in comparison with the excavation results.
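
    The core of a smoothness-constrained least-squares inversion of the kind ELRIS2D implements fits in a few lines. The sketch below solves a toy linear problem; in the nonlinear DCR/IP case the same regularized solve is repeated each iteration with the Jacobian of the forward response in place of G. All sizes and the smoothing weight are hypothetical.

      % Smoothness-constrained least squares on a toy linear problem G*m = d.
      nm = 100; nd = 60;
      G = rand(nd, nm);                    % stand-in for the sensitivity (Jacobian) matrix
      mtrue = sin(linspace(0, 3*pi, nm))';
      d = G * mtrue + 0.01 * randn(nd, 1); % noisy synthetic data
      L = spdiags([-ones(nm,1) ones(nm,1)], [0 1], nm-1, nm);   % roughness operator
      beta = 1;                            % regularization (smoothing) weight
      m = (G'*G + beta*(L'*L)) \ (G'*d);   % regularized normal equations
      plot(1:nm, mtrue, 1:nm, m); legend('true', 'inverted')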

  16. Electromagnetic field enhancement and light localization in deterministic aperiodic nanostructures

    NASA Astrophysics Data System (ADS)

    Gopinath, Ashwin

    The control of light-matter interaction in periodic and random media has been investigated in depth during the last few decades, yet structures with a controlled degree of disorder, such as Deterministic Aperiodic Nano Structures (DANS), have been relatively unexplored. DANS are characterized by non-periodic yet long-range correlated (deterministic) morphologies and can be generated by the mathematical rules of symbolic dynamics and number theory. In this thesis, I have experimentally investigated the unique light transport and localization properties in planar dielectric and metal (plasmonic) DANS. In particular, I have focused on the design, nanofabrication and optical characterization of DANS, formed by arranging metal/dielectric nanoparticles in an aperiodic lattice. This effort is directed towards development of on-chip nanophotonic applications with emphasis on label-free bio-sensing and enhanced light emission. The DANS designed as Surface Enhanced Raman Scattering (SERS) substrates are composed of multi-scale aperiodic nanoparticle arrays fabricated by e-beam lithography and are capable of reproducibly demonstrating enhancement factors as high as ~10^7. Further improvement of SERS efficiency is achieved by combining DANS formed by a top-down approach with bottom-up reduction of gold nanoparticles, to fabricate novel nanostructures called plasmonic "nano-galaxies", which increase the SERS enhancement factors by 2-3 orders of magnitude while preserving the reproducibility. In this thesis, along with presenting details of fabrication and SERS characterization of these "rationally designed" SERS substrates, I will also present results on using these substrates for detection of DNA nucleobases, as well as reproducible label-free detection of pathogenic bacteria with species specificity. In addition to biochemical detection, the combination of broadband light scattering behavior and the ability for the generation of reproducible high fields in DANS make these

  17. Improved Modeling in a Matlab-Based Navigation System

    NASA Technical Reports Server (NTRS)

    Deutschmann, Julie; Bar-Itzhack, Itzhack; Harman, Rick; Larimore, Wallace E.

    1999-01-01

    An innovative approach to autonomous navigation is available for low earth orbit satellites. The system is developed in Matlab and utilizes an Extended Kalman Filter (EKF) to estimate the attitude and trajectory based on spacecraft magnetometer and gyro data. Preliminary tests of the system with real spacecraft data from the Rossi X-Ray Timing Explorer Satellite (RXTE) indicate the existence of unmodeled errors in the magnetometer data. Incorporating into the EKF a statistical model that describes the colored component of the effective measurement of the magnetic field vector could improve the accuracy of the trajectory and attitude estimates and also improve the convergence time. This model is identified as a first order Markov process. With the addition of the model, the EKF attempts to identify the non-white components of the noise, allowing for more accurate estimation of the original state vector, i.e. the orbital elements and the attitude. Working in Matlab allows for easy incorporation of new models into the EKF, and the resulting navigation system is generic and can easily be applied to future missions, resulting in an alternative for onboard or ground-based navigation.

  18. Influences on pregnancy-termination decisions in Matlab, Bangladesh.

    PubMed

    DaVanzo, Julie; Rahman, Mizanur; Ahmed, Shahabuddin; Razzaque, Abdur

    2013-10-01

    We investigate factors affecting women's decisions to terminate pregnancies in Matlab, Bangladesh, using logistic regression on high-quality data from the Demographic Surveillance System on more than 215,000 pregnancies that occurred between 1978 and 2008. Variables associated with the desire not to have another birth soon (very young and older maternal age, a greater number of living children, the recent birth of twins or of a son, a short interval since a recent live birth) are associated with a greater likelihood of pregnancy termination, and the effects of many of these explanatory variables are stronger in more recent years. Women are less likely to terminate a pregnancy if they don't have any living sons or recently experienced a miscarriage, a stillbirth, or the death of a child. The higher the woman's level of education, the more likely she is to terminate a pregnancy. Between 1982 and the mid-2000s, pregnancy termination was significantly less likely in the area of Matlab with better family planning services.

  19. HEATKAU Program.

    SciTech Connect

    ELDIN NAFEE, SHERIF SALAH

    2013-07-24

    Version 00 Calculation of the decay heat is of great importance for the design of the shielding of discharged fuel, the design and transport of fuel-storage flasks and the management of the resulting radioactive waste. These are relevant to safety and have large economic and legislative consequences. In the HEATKAU code, a new approach has been proposed to evaluate the decay heat power after a fission burst of a fissile nuclide for short cooling times. This method is based on the numerical solution of the coupled linear differential equations that describe the decays and build-ups of the minor fission product (MFP) nuclides. HEATKAU is written entirely in the MATLAB programming environment. The MATLAB data can be stored in a standard, fast and easy-access, platform-independent binary format which is easy to visualize.
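
    The numerical core described here, coupled linear ODEs for decay and build-up, has a compact solution via the matrix exponential. A sketch with a hypothetical three-member decay chain (illustrative constants, not HEATKAU's fission-product library):

      % dN/dt = A*N for a chain N1 -> N2 -> N3 (stable).
      lam = [1e-2; 1e-3; 0];               % decay constants (1/s), hypothetical
      E   = [1.2; 0.8; 0];                 % energy per decay (MeV), hypothetical
      A   = [-lam(1)       0   0;
              lam(1) -lam(2)   0;
                   0  lam(2)   0];
      N0  = [1e10; 0; 0];
      t   = logspace(0, 5, 200);  P = zeros(size(t));
      for i = 1:numel(t)
          N = expm(A * t(i)) * N0;         % exact solution of the linear system
          P(i) = sum(lam .* E .* N);       % decay heat power (MeV/s)
      end
      loglog(t, P); xlabel('cooling time (s)'); ylabel('decay power')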

  20. Rapid detection of small oscillation faults via deterministic learning.

    PubMed

    Wang, Cong; Chen, Tianrui

    2011-08-01

    Detection of small faults is one of the most important and challenging tasks in the area of fault diagnosis. In this paper, we present an approach for the rapid detection of small oscillation faults based on a recently proposed deterministic learning (DL) theory. The approach consists of two phases: the training phase and the test phase. In the training phase, the system dynamics underlying normal and fault oscillations are locally accurately approximated through DL. The obtained knowledge of system dynamics is stored in constant radial basis function (RBF) networks. In the test phase, rapid detection is implemented. Specifically, a bank of estimators is constructed using the constant RBF neural networks to represent the training normal and fault modes. By comparing the set of estimators with the test monitored system, a set of residuals is generated, and the average L1 norms of the residuals are taken as the measure of the differences between the dynamics of the monitored system and the dynamics of the training normal mode and oscillation faults. The occurrence of a test oscillation fault can be rapidly detected according to the smallest residual principle. A rigorous analysis of the performance of the detection scheme is also given. The novelty of the paper lies in that the modeling uncertainty and nonlinear fault functions are accurately approximated and then the knowledge is utilized to achieve rapid detection of small oscillation faults. Simulation studies are included to demonstrate the effectiveness of the approach.

  1. Deterministic lateral displacement for particle separation: a review.

    PubMed

    McGrath, J; Jimenez, M; Bridle, H

    2014-11-07

    Deterministic lateral displacement (DLD), a hydrodynamic, microfluidic technology, was first reported by Huang et al. in 2004 to separate particles on the basis of size in continuous flow with a resolution down to 10 nm. For 10 years, DLD has been extensively studied, employed and modified by researchers in terms of theory, design, microfabrication and application to develop newer, faster and more efficient tools for separation of millimetre-, micrometre- and even sub-micrometre-sized particles. To extend the range of potential applications, the specific arrangement of geometric features in DLD has also been adapted and/or coupled with external forces (e.g. acoustic, electric, gravitational) to separate particles on the basis of properties other than size, such as shape, deformability and dielectric properties. Furthermore, investigations into DLD performance where inertial and non-Newtonian effects are present have been conducted. However, the evolution and application of DLD have not yet been reviewed. In this paper, we collate many interesting publications to provide a comprehensive review of the development and diversity of this technology, but also provide scope for future directions and detail the fundamentals for those wishing to design such devices for the first time.

  2. Particle separation using virtual deterministic lateral displacement (vDLD).

    PubMed

    Collins, David J; Alan, Tuncay; Neild, Adrian

    2014-05-07

    We present a method for sensitive and tunable particle sorting that we term virtual deterministic lateral displacement (vDLD). The vDLD system is composed of a set of interdigital transducers (IDTs) within a microfluidic chamber that produce a force field at an angle to the flow direction. Particles above a critical diameter, a function of the force induced by viscous drag and the force field, are displaced laterally along the minimum force potential lines, while smaller particles continue in the direction of the fluid flow without substantial perturbations. We demonstrate the effective separation of particles in a continuous-flow system with size sensitivity comparable to or better than other previously reported microfluidic separation techniques. Separation of 5.0 μm from 6.6 μm, 6.6 μm from 7.0 μm, and 300 nm from 500 nm particles is all achieved using the same device architecture. With the high sensitivity and flexibility vDLD affords, we expect it to find application in a wide variety of microfluidic platforms.

  3. Method to deterministically study photonic nanostructures in different experimental instruments.

    PubMed

    Husken, B H; Woldering, L A; Blum, C; Vos, W L

    2009-01-01

    We describe an experimental method to recover a single, deterministically fabricated nanostructure in various experimental instruments without the use of artificially fabricated markers, with the aim of studying photonic structures. To this end, a detailed map of the spatial surroundings of the nanostructure is made during the fabrication of the structure. These maps are made using a series of micrographs with successively decreasing magnifications. The micrographs reveal intrinsic and characteristic geometric features that can subsequently be used in different setups to act as markers. As an illustration, we probe surface cavities with radii of 65 nm on a silica opal photonic crystal with various setups: a focused ion beam workstation; a scanning electron microscope (SEM); a wide field optical microscope; and a confocal microscope. We use cross-correlation techniques to recover a small area imaged with the SEM in a large area photographed with the optical microscope, which provides a possible avenue to automatic searching. We show how both structural and optical reflectivity data can be obtained from one and the same nanostructure. Since our approach does not use artificial grids or markers, it is of particular interest for samples whose structure is not known a priori, like samples created solely by self-assembly. In addition, our method is not restricted to conducting samples.
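
    The cross-correlation step can be sketched with MATLAB's normalized 2-D cross-correlation, normxcorr2 (Image Processing Toolbox). The file names are hypothetical stand-ins for the SEM detail and the optical overview, assumed here to be grayscale images:

      % Locate a small SEM field of view inside a large optical micrograph.
      big   = im2double(imread('optical_map.png'));   % hypothetical file names
      small = im2double(imread('sem_detail.png'));
      c = normxcorr2(small, big);          % normalized cross-correlation surface
      [ypk, xpk] = find(c == max(c(:)), 1);
      yoff = ypk - size(small, 1);         % top-left corner of the best match
      xoff = xpk - size(small, 2);
      imshow(big); hold on
      rectangle('Position', [xoff+1, yoff+1, size(small,2), size(small,1)], ...
                'EdgeColor', 'r')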

  4. Equivalence of deterministic walks on regular lattices on the plane

    NASA Astrophysics Data System (ADS)

    Rechtman, Ana; Rechtman, Raúl

    2017-01-01

    We consider deterministic walks on square, triangular and hexagonal two-dimensional lattices. In each case, there is a scatterer at every lattice site that can be in one of two states, forcing the walker to turn either to his/her immediate right or left. After the walker is scattered, the scatterer changes state. A lattice with an arrangement of scatterers is an environment. We show that there are only two environments for which the scattering rules are injective, mirrors or rotators, on the three lattices. On hexagonal lattices, Webb and Cohen (2014) proved that if a walker with a given initial position and velocity moves through an environment of mirrors (rotators), then there is an environment of rotators (mirrors) through which the walker would move with the same trajectory. We refer to these trajectories on mirror and rotator environments as equivalent walks. We prove the equivalence of walks on square and triangular lattices and include a proof of the equivalence of walks on hexagonal lattices. The proofs are based both on the geometry of the lattice and the structure of the scattering rule.

  5. Deterministic methods for multi-control fuel loading optimization

    NASA Astrophysics Data System (ADS)

    Rahman, Fariz B. Abdul

    We have developed a multi-control fuel loading optimization code for pressurized water reactors based on deterministic methods. The objective is to flatten the fuel burnup profile, which maximizes overall energy production. The optimal control problem is formulated using the method of Lagrange multipliers and the direct adjoining approach for treatment of the inequality power peaking constraint. The optimality conditions are derived for a multi-dimensional multi-group optimal control problem via calculus of variations. Due to the Hamiltonian having a linear control, our optimal control problem is solved using the gradient method to minimize the Hamiltonian and a Newton step formulation to obtain the optimal control. We are able to satisfy the power peaking constraint during depletion with the control at beginning of cycle (BOC) by building the proper burnup path forward in time and utilizing the adjoint burnup to propagate the information back to the BOC. Our test results show that we are able to achieve our objective and satisfy the power peaking constraint during depletion using either the fissile enrichment or burnable poison as the control. Our fuel loading designs show an increase of 7.8 equivalent full power days (EFPDs) in cycle length compared with 517.4 EFPDs for the AP600 first cycle.

  6. Entrepreneurs, Chance, and the Deterministic Concentration of Wealth

    PubMed Central

    Fargione, Joseph E.; Lehman, Clarence; Polasky, Stephen

    2011-01-01

    In many economies, wealth is strikingly concentrated. Entrepreneurs, individuals with ownership in for-profit enterprises, comprise a large portion of the wealthiest individuals, and their behavior may help explain patterns in the national distribution of wealth. Entrepreneurs are less diversified and more heavily invested in their own companies than is commonly assumed in economic models. We present an intentionally simplified individual-based model of wealth generation among entrepreneurs to assess the role of chance and determinism in the distribution of wealth. We demonstrate that chance alone, combined with the deterministic effects of compounding returns, can lead to unlimited concentration of wealth, such that the percentage of all wealth owned by a few entrepreneurs eventually approaches 100%. Specifically, concentration of wealth results when the rate of return on investment varies by entrepreneur and by time. This result is robust to inclusion of realities such as differing skill among entrepreneurs. The most likely overall growth rate of the economy decreases as businesses become less diverse, suggesting that high concentrations of wealth may adversely affect a country's economic growth. We show that a tax on large inherited fortunes, applied to a small portion of the most fortunate in the population, can efficiently arrest the concentration of wealth at intermediate levels. PMID:21814540
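
    The core mechanism, random multiplicative returns compounding into concentration, fits in a few lines. A minimal sketch with illustrative parameters (not the authors' calibration):

      % N entrepreneurs, returns varying by individual and year; top-1% share.
      N = 1e4; T = 200;
      W = ones(N, 1);                      % equal starting wealth
      topshare = zeros(T, 1);
      for t = 1:T
          r = 0.05 + 0.3 * randn(N, 1);    % return varies by entrepreneur and time
          W = max(W .* (1 + r), 0);        % compounding (clamped at zero)
          Ws = sort(W, 'descend');
          topshare(t) = sum(Ws(1:N/100)) / sum(W);
      end
      plot(topshare); xlabel('year'); ylabel('top 1% wealth share')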

  7. Insights into the deterministic skill of air quality ensembles ...

    EPA Pesticide Factsheets

    Simulations from chemical weather models are subject to uncertainties in the input data (e.g. emission inventory, initial and boundary conditions) as well as those intrinsic to the model (e.g. physical parameterization, chemical mechanism). Multi-model ensembles can improve the forecast skill, provided that certain mathematical conditions are fulfilled. In this work, four ensemble methods were applied to two different datasets, and their performance was compared for ozone (O3), nitrogen dioxide (NO2) and particulate matter (PM10). Apart from the unconditional ensemble average, the approach behind the other three methods relies on adding optimum weights to members or constraining the ensemble to those members that meet certain conditions in time or frequency domain. The two different datasets were created for the first and second phase of the Air Quality Model Evaluation International Initiative (AQMEII). The methods are evaluated against ground level observations collected from the EMEP (European Monitoring and Evaluation Programme) and AirBase databases. The goal of the study is to quantify to what extent we can extract predictable signals from an ensemble with superior skill over the single models and the ensemble mean. Verification statistics show that the deterministic models simulate better O3 than NO2 and PM10, linked to different levels of complexity in the represented processes. The unconditional ensemble mean achieves higher skill compared to each stati

  8. Estimating interdependences in networks of weakly coupled deterministic systems

    NASA Astrophysics Data System (ADS)

    de Feo, Oscar; Carmeli, Cristian

    2008-02-01

    The extraction of information from measured data about the interactions taking place in a network of systems is a key topic in modern applied sciences. This topic has been traditionally addressed by considering bivariate time series, providing methods which are sometimes difficult to extend to multivariate data, the limiting factor being the computational complexity. Here, we present a computationally viable method based on black-box modeling which, while theoretically applicable only when a deterministic hypothesis about the processes behind the recordings is plausible, proves to work also when this assumption is severely affected. Conceptually, the method is very simple and is composed of three independent steps: in the first step a state-space reconstruction is performed separately on each measured signal; in the second step, a local model, i.e., a nonlinear dynamical system, is fitted separately on each (reconstructed) measured signal; afterward, a linear model of the dynamical interactions is obtained by cross-relating the (reconstructed) measured variables to the dynamics unexplained by the local models. The method is successfully validated on numerically generated data. An assessment of its sensitivity to data length and modeling and measurement noise intensity, and of its applicability to large-scale systems, is also provided.
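
    The first step of the method, separate state-space reconstruction of each signal, is typically a time-delay embedding. A minimal sketch (embedding dimension and lag are hypothetical choices a user would tune, e.g. by false-nearest-neighbour or mutual-information criteria):

      % Delay-embed a scalar time series x into m-dimensional state vectors.
      m = 3; tau = 8;                      % embedding dimension and lag
      x = sin(0.05*(1:2000)') + 0.1*randn(2000, 1);
      rows = numel(x) - (m - 1)*tau;
      Y = zeros(rows, m);
      for j = 1:m
          Y(:, j) = x((1:rows) + (j - 1)*tau);   % columns are lagged copies
      end
      plot3(Y(:,1), Y(:,2), Y(:,3))        % reconstructed trajectory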

  9. Deterministic versus evidence-based attitude towards clinical diagnosis.

    PubMed

    Soltani, Akbar; Moayyeri, Alireza

    2007-08-01

    Generally, two basic classes have been proposed for the scientific explanation of events. Deductive reasoning emphasizes reaching conclusions about a hypothesis based on verification of universal laws pertinent to that hypothesis, while inductive or probabilistic reasoning explains an event by calculating the probability of that event being related to a given hypothesis. Although both types of reasoning are used in clinical practice, evidence-based medicine stresses the advantages of the second approach for most instances of medical decision making. While 'probabilistic or evidence-based' reasoning seems to involve more mathematical formulas at first glance, this attitude is more dynamic and less imprisoned by the rigidity of mathematics compared with the 'deterministic or mathematical' attitude. In the field of medical diagnosis, appreciation of uncertainty in clinical encounters and utilization of the likelihood ratio as a measure of accuracy seem to be the most important characteristics of evidence-based doctors. Other characteristics include the use of series of tests for refining probability, changing diagnostic thresholds in light of external evidence and the nature of the disease, and attention to confidence intervals to estimate the uncertainty of research-derived parameters.
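
    The likelihood-ratio arithmetic referred to here is compact enough to show directly. A minimal sketch of updating a pre-test probability with a test's likelihood ratio (the numbers are purely illustrative):

      % Post-test probability from pre-test probability and likelihood ratio.
      p_pre = 0.20;                        % pre-test probability (illustrative)
      LR    = 8;                           % positive likelihood ratio of the test
      odds_pre  = p_pre / (1 - p_pre);
      odds_post = odds_pre * LR;           % Bayes' rule in odds form
      p_post = odds_post / (1 + odds_post) % about 0.67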

  10. Turning Indium Oxide into a Superior Electrocatalyst: Deterministic Heteroatoms

    PubMed Central

    Zhang, Bo; Zhang, Nan Nan; Chen, Jian Fu; Hou, Yu; Yang, Shuang; Guo, Jian Wei; Yang, Xiao Hua; Zhong, Ju Hua; Wang, Hai Feng; Hu, P.; Zhao, Hui Jun; Yang, Hua Gui

    2013-01-01

    Efficient electrocatalysts for many heterogeneous catalytic processes in energy conversion and storage systems must possess the necessary surface active sites. Here we identify, from X-ray photoelectron spectroscopy and density functional theory calculations, that controlling charge density redistribution via the atomic-scale incorporation of heteroatoms is paramount for introducing surface active sites. We engineer deterministic nitrogen atoms inserted into the bulk material to preferentially expose active sites, turning the inactive material into an efficient electrocatalyst. The excellent electrocatalytic activity of N-In2O3 nanocrystals leads to higher performance of dye-sensitized solar cells (DSCs) than DSCs fabricated with Pt. This strategy provides a rational design route for transforming abundant materials into highly efficient electrocatalysts. More importantly, the discovery that the transparent conductive oxide (TCO) commonly used in DSCs can serve as the counter electrode material means that, besides reducing cost, the device structure and processing techniques of DSCs can be simplified in the future. PMID:24173503

  11. Mesoscopic quantum emitters from deterministic aggregates of conjugated polymers

    PubMed Central

    Stangl, Thomas; Wilhelm, Philipp; Remmerssen, Klaas; Höger, Sigurd; Vogelsang, Jan; Lupton, John M.

    2015-01-01

    An appealing definition of the term “molecule” arises from consideration of the nature of fluorescence, with discrete molecular entities emitting a stream of single photons. We address the question of how large a molecular object may become by growing deterministic aggregates from single conjugated polymer chains. Even particles containing dozens of individual chains still behave as single quantum emitters due to efficient excitation energy transfer, whereas the brightness is raised due to the increased absorption cross-section of the suprastructure. Excitation energy can delocalize between individual polymer chromophores in these aggregates by both coherent and incoherent coupling, which are differentiated by their distinct spectroscopic fingerprints. Coherent coupling is identified by a 10-fold increase in excited-state lifetime and a corresponding spectral red shift. Exciton quenching due to incoherent FRET becomes more significant as aggregate size increases, resulting in single-aggregate emission characterized by strong blinking. This mesoscale approach allows us to identify intermolecular interactions which do not exist in isolated chains and are inaccessible in bulk films where they are present but masked by disorder. PMID:26417079

  12. Entrepreneurs, chance, and the deterministic concentration of wealth.

    PubMed

    Fargione, Joseph E; Lehman, Clarence; Polasky, Stephen

    2011-01-01

    In many economies, wealth is strikingly concentrated. Entrepreneurs--individuals with ownership in for-profit enterprises--comprise a large portion of the wealthiest individuals, and their behavior may help explain patterns in the national distribution of wealth. Entrepreneurs are less diversified and more heavily invested in their own companies than is commonly assumed in economic models. We present an intentionally simplified individual-based model of wealth generation among entrepreneurs to assess the role of chance and determinism in the distribution of wealth. We demonstrate that chance alone, combined with the deterministic effects of compounding returns, can lead to unlimited concentration of wealth, such that the percentage of all wealth owned by a few entrepreneurs eventually approaches 100%. Specifically, concentration of wealth results when the rate of return on investment varies by entrepreneur and by time. This result is robust to inclusion of realities such as differing skill among entrepreneurs. The most likely overall growth rate of the economy decreases as businesses become less diverse, suggesting that high concentrations of wealth may adversely affect a country's economic growth. We show that a tax on large inherited fortunes, applied to a small portion of the most fortunate in the population, can efficiently arrest the concentration of wealth at intermediate levels.

  13. Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information

    PubMed Central

    Liu, Desheng; Huang, Zhiping; Zhang, Yimeng; Guo, Xiaojun; Su, Shaojing

    2016-01-01

    Obtaining a minimal automaton is a fundamental issue in the theory and practical implementation of deterministic finite automata (DFAs). A minimization algorithm is presented in this paper that consists of two main phases. In the first phase, the backward depth information is built, and the state set of the DFA is partitioned into many blocks. In the second phase, the state set is refined using a hash table. The minimization algorithm has a lower time complexity, O(n), than a naive comparison of transitions, O(n^2). Few states need to be refined by the hash table, because most states have been partitioned by the backward depth information in the coarse partition. This method achieves greater generality than previous methods because building the backward depth information is independent of the topological complexity of the DFA. The proposed algorithm can be applied not only to the minimization of acyclic automata or simple cyclic automata, but also to automata with high topological complexity. Overall, the proposal has three advantages: lower time complexity, greater generality, and scalability. A comparison to Hopcroft’s algorithm demonstrates experimentally that the algorithm runs faster than traditional algorithms. PMID:27806102
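
    The paper's algorithm is not reproduced here, but the partition-refinement idea it builds on can be made concrete with a short Moore-style sketch (quadratic-ish, shown only to illustrate what "blocks" and "refinement" mean; delta and acc are a hypothetical transition table and accepting-state vector):

      % Moore-style DFA minimization by iterative partition refinement.
      % delta(s, a): next state from state s on symbol a; acc(s): accepting?
      delta = [2 3; 2 4; 2 3; 2 5; 2 3];   % 5 states, 2 symbols (hypothetical DFA)
      acc   = logical([0 0 0 0 1]');
      class = double(acc) + 1;             % initial blocks: accepting vs not
      nblocks = 0;
      while numel(unique(class)) ~= nblocks
          nblocks = numel(unique(class));
          sig = [class, class(delta)];     % signature: own block + successor blocks
          [~, ~, class] = unique(sig, 'rows');   % split blocks by signature
      end
      disp(class')                         % states sharing a label are equivalent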

  14. MATLAB tools for improved characterization and quantification of volcanic incandescence in Webcam imagery; applications at Kilauea Volcano, Hawai'i

    USGS Publications Warehouse

    Patrick, Matthew R.; Kauahikaua, James P.; Antolik, Loren

    2010-01-01

    These scripts would require minor to moderate modifications for use elsewhere, primarily to customize directory navigation. If the user has some familiarity with MATLAB, or programming in general, these modifications should be easy. Although we originally anticipated needing the Image Processing Toolbox, the scripts in the appendixes do not require it. Thus, only the base installation of MATLAB is needed. Because fairly basic MATLAB functions are used, we expect that the script can be run successfully by versions earlier than 2009b.

  15. CellSegm - a MATLAB toolbox for high-throughput 3D cell segmentation.

    PubMed

    Hodneland, Erlend; Kögel, Tanja; Frei, Dominik Michael; Gerdes, Hans-Hermann; Lundervold, Arvid

    2013-08-09

    The application of fluorescence microscopy in cell biology often generates a huge amount of imaging data. Automated whole cell segmentation of such data enables the detection and analysis of individual cells, where a manual delineation is often time consuming, or practically not feasible. Furthermore, compared to manual analysis, automation normally has a higher degree of reproducibility. CellSegm, the software presented in this work, is a Matlab-based command-line software toolbox providing automated whole cell segmentation of images showing surface stained cells, acquired by fluorescence microscopy. It has options for both fully automated and semi-automated cell segmentation. Major algorithmic steps are: (i) smoothing, (ii) Hessian-based ridge enhancement, (iii) marker-controlled watershed segmentation, and (iv) feature-based classification of cell candidates. Using a wide selection of image recordings and code snippets, we demonstrate that CellSegm has the ability to detect various types of surface stained cells in 3D. After detection and outlining of individual cells, the cell candidates can be subject to software based analysis, specified and programmed by the end-user, or they can be analyzed by other software tools. A segmentation of tissue samples with appropriate characteristics is also shown to be resolvable in CellSegm. The command-line interface of CellSegm facilitates scripting of the separate tools, all implemented in Matlab, offering a high degree of flexibility and tailored workflows for the end-user. The modularity and scripting capabilities of CellSegm enable automated workflows and quantitative analysis of microscopic data, suited for high-throughput image based screening.
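
    Steps (i)-(iii) of the pipeline have close analogues in standard Image Processing Toolbox calls. A heavily simplified 2-D sketch with a hypothetical input image (CellSegm itself works in 3-D, uses Hessian-based ridge enhancement, and is far more careful):

      % Marker-controlled watershed on a surface-stained (membrane-bright) image.
      I  = im2double(imread('cells.png')); % hypothetical grayscale input
      Is = imgaussfilt(I, 2);              % (i) smoothing
      markers = imextendedmin(Is, 0.05);   % cell interiors: dark local minima
      Im = imimposemin(Is, markers);       % (iii) impose markers...
      Lw = watershed(Im);                  % ...then watershed
      imshow(label2rgb(Lw, 'jet', 'w', 'shuffle'))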

  16. Stochastic model of tumor-induced angiogenesis: Ensemble averages and deterministic equations

    NASA Astrophysics Data System (ADS)

    Terragni, F.; Carretero, M.; Capasso, V.; Bonilla, L. L.

    2016-02-01

    A recent conceptual model of tumor-driven angiogenesis including branching, elongation, and anastomosis of blood vessels captures some of the intrinsic multiscale structures of this complex system, yet allows one to extract a deterministic integro-partial-differential description of the vessel tip density [Phys. Rev. E 90, 062716 (2014)]. Here we solve the stochastic model, show that ensemble averages over many realizations correspond to the deterministic equations, and fit the anastomosis rate coefficient so that the total number of vessel tips evolves similarly in the deterministic and ensemble-averaged stochastic descriptions.

  17. Hybrid Monte Carlo-Deterministic Methods for Nuclear Reactor-Related Criticality Calculations

    SciTech Connect

    Edward W. Larson

    2004-02-17

    The overall goal of this project is to develop, implement, and test new Hybrid Monte Carlo-deterministic (or simply Hybrid) methods for the more efficient and more accurate calculation of nuclear engineering criticality problems. These new methods will make use of two (philosophically and practically) very different techniques - the Monte Carlo technique, and the deterministic technique - which have been developed completely independently during the past 50 years. The concept of this proposal is to merge these two approaches and develop fundamentally new computational techniques that enhance the strengths of the individual Monte Carlo and deterministic approaches, while minimizing their weaknesses.

  18. Generality of Deterministic Chaos, Exponential Spectra, and Lorentzian Pulses in Magnetically Confined Plasmas

    NASA Astrophysics Data System (ADS)

    Maggs, J. E.; Morales, G. J.

    2011-10-01

    The dynamics of transport at the edge of magnetized plasmas is deterministic chaos. The connection is made by a previous survey [M. A. Pedrosa et al., Phys. Rev. Lett. 82, 3621 (1999)] of measurements of fluctuations that is shown to exhibit power spectra with exponential frequency dependence over a broad range, which is the signature of deterministic chaos. The exponential character arises from Lorentzian pulses. The results suggest that the generalization to complex times used in studies of deterministic chaos is a representation of Lorentzian pulses emerging from the chaotic dynamics.
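
    The signature is easy to reproduce numerically: the Fourier transform of a Lorentzian pulse of width tau decays exponentially in frequency, so its power spectrum is a straight line on a semilog plot. A minimal sketch (pulse width and grid are arbitrary):

      % Power spectrum of a Lorentzian pulse x(t) = tau^2 / (t^2 + tau^2).
      dt = 1e-3; t = (-10:dt:10)'; tau = 0.2;
      x = tau^2 ./ (t.^2 + tau^2);
      X = fftshift(fft(x)) * dt;           % approximate continuous Fourier transform
      Nt = numel(t);
      f = ((0:Nt-1)' - floor(Nt/2)) / (Nt*dt);
      semilogy(f, abs(X).^2); xlim([0 20])
      xlabel('frequency'); ylabel('|X(f)|^2')   % straight line: exponential spectrum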

  19. Matlab Stability and Control Toolbox: Trim and Static Stability Module

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.

    2006-01-01

    This paper presents the technical background of the Trim and Static module of the Matlab Stability and Control Toolbox. This module performs a low-fidelity stability and control assessment of an aircraft model for a set of flight critical conditions. This is attained by determining if the control authority available for trim is sufficient and if the static stability characteristics are adequate. These conditions can be selected from a prescribed set or can be specified to meet particular requirements. The prescribed set of conditions includes horizontal flight, take-off rotation, landing flare, steady roll, steady turn and pull-up/push-over flight, for which several operating conditions can be specified. A mathematical model was developed allowing for six-dimensional trim, adjustable inertial properties, asymmetric vehicle layouts, arbitrary number of engines, multi-axial thrust vectoring, engine(s)-out conditions, crosswind and gyroscopic effects.

  20. MATLAB tools for lidar data conversion, visualization, and processing

    NASA Astrophysics Data System (ADS)

    Wang, Xiao; Zhou, Kaijing; Yang, Jie; Lu, Yilong

    2011-10-01

    LIDAR (LIght Detection and Ranging) [1] is an optical remote sensing technology that has gained increasing acceptance for topographic mapping. LIDAR technology offers higher accuracy than RADAR and has wide applications. The commercial market for LIDAR has grown greatly in the last few years. The LAS format has been approved as the standard data format for interchanging LIDAR data among software developers, manufacturers and end users, and it reduces data size compared to ASCII formats. However, LAS files can typically only be visualized with expensive commercial software; the free tools that are available are not user-friendly and offer limited visualization functionality. This makes it difficult for researchers to investigate and use LIDAR data. Therefore, there is a need for an efficient and low-cost LIDAR data toolbox. For this purpose we have developed a free and efficient Matlab tool for LIDAR data conversion, visualization and processing.
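    As a flavor of what such a conversion tool involves, the sketch below reads the public header and XYZ coordinates of a LAS file in base MATLAB. It assumes a little-endian LAS 1.1/1.2 file whose point records begin with X, Y, Z as int32 fields (true for point formats 0-3); 'sample.las' is a placeholder file name, and implicit expansion (R2016b+) is used for the scaling step.

```matlab
% Minimal LAS point reader (header offsets per the LAS 1.1/1.2 spec).
fid = fopen('sample.las', 'r', 'ieee-le');
sig = fread(fid, 4, '*char')';
assert(strcmp(sig, 'LASF'), 'not a LAS file');
fseek(fid, 96, 'bof');
ptOffset = fread(fid, 1, 'uint32');         % offset to point data [bytes]
fseek(fid, 105, 'bof');
recLen = fread(fid, 1, 'uint16');           % bytes per point record
nPts   = fread(fid, 1, 'uint32');           % number of point records
fseek(fid, 131, 'bof');                     % skip 'points by return' block
sc  = fread(fid, 3, 'double');              % x,y,z scale factors
off = fread(fid, 3, 'double');              % x,y,z offsets
fseek(fid, ptOffset, 'bof');
raw = fread(fid, [3, nPts], '3*int32=>double', recLen - 12);
fclose(fid);
xyz = raw' .* sc' + off';                   % apply scale and offset
scatter3(xyz(:,1), xyz(:,2), xyz(:,3), 1, xyz(:,3), '.');
```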

  1. MATLAB script for analyzing and visualizing scanline data

    NASA Astrophysics Data System (ADS)

    Markovaara-Koivisto, M.; Laine, E.

    2012-03-01

    Scanline surveys consist of directional and qualitative measurements of rock discontinuities. These surveys are used in geologic and engineering investigations of fractured rock masses. This paper introduces a new MATLAB script developed for visualizing results from scanline surveys as traces in 2D and disks in 3D. The script is also able to cluster orientation data, present statistical summaries, and reflect the change in the degree of rock brokenness along the scanline. Advantages of this new script are that it can present undulating discontinuities as wavy surfaces and different discontinuity properties using color codes. An intensity rose diagram is utilized to visualize the interdependency of certain properties and orientation. This new script has the potential for preprocessing vast amounts of scanline and oriented drill core logging data before using it in 3D discontinuity network modeling. The script is demonstrated using data concerning rock fracturing gathered from a dimension stone quarry in Southern Finland.
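    As a small illustration of the kind of orientation graphics involved, a rose diagram of discontinuity dip directions takes only a few lines in base MATLAB (the angles below are invented):

```matlab
% Rose diagram of discontinuity dip directions (hypothetical data, degrees).
dipDir = [34 40 38 122 130 215 220 218 305 310 312];
polarhistogram(deg2rad(dipDir), 24);       % 15-degree bins (R2016b+)
title('Dip-direction rose diagram');
```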

  2. Perinatal mortality attributable to complications of childbirth in Matlab, Bangladesh.

    PubMed Central

    Kusiako, T.; Ronsmans, C.; Van der Paal, L.

    2000-01-01

    Very few population-based studies of perinatal mortality in developing countries have examined the role of intrapartum risk factors. In the present study, the proportion of perinatal deaths that are attributable to complications during childbirth in Matlab, Bangladesh, was assessed using community-based data from a home-based programme led by professional midwives between 1987 and 1993. Complications during labour and delivery--such as prolonged or obstructed labour, abnormal fetal position, and hypertensive diseases of pregnancy--increased the risk of perinatal mortality fivefold and accounted for 30% of perinatal deaths. Premature labour, which occurred in 20% of pregnancies, accounted for 27% of perinatal mortality. Better care by qualified staff during delivery and improved care of newborns should substantially reduce perinatal mortality in this study population. PMID:10859856

  3. A smart grid simulation testbed using Matlab/Simulink

    NASA Astrophysics Data System (ADS)

    Mallapuram, Sriharsha; Moulema, Paul; Yu, Wei

    2014-06-01

    The smart grid is the integration of computing and communication technologies into a power grid with a goal of enabling real time control, and a reliable, secure, and efficient energy system [1]. With the increased interest of the research community and stakeholders towards the smart grid, a number of solutions and algorithms have been developed and proposed to address issues related to smart grid operations and functions. Those technologies and solutions need to be tested and validated before implementation using software simulators. In this paper, we developed a general smart grid simulation model in the MATLAB/Simulink environment, which integrates renewable energy resources, energy storage technology, load monitoring and control capability. To demonstrate and validate the effectiveness of our simulation model, we created simulation scenarios and performed simulations using a real-world data set provided by the Pecan Street Research Institute.

  4. Matlab Cluster Ensemble Toolbox v. 1.0

    SciTech Connect

    2009-04-27

    This is a Matlab toolbox for investigating the application of cluster ensembles to data classification, with the objective of improving the accuracy and/or speed of clustering. The toolbox divides the cluster ensemble problem into four areas, providing functionality for each: (1) synthetic data generation, (2) clustering to generate individual data partitions and similarity matrices, (3) consensus function generation and final clustering to generate the ensemble data partitioning, and (4) implementation of accuracy metrics. With regard to data generation, Gaussian data of arbitrary dimension can be generated. The kcenters algorithm can then be used to generate individual data partitions by either (a) subsampling the data and clustering each subsample, or (b) randomly initializing the algorithm and generating a clustering for each initialization. In either case an overall similarity matrix can be computed using a consensus function operating on the individual similarity matrices. A final clustering can be performed, and performance metrics are provided for evaluation purposes.
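    A compact sketch of the evidence-accumulation idea behind steps (2)-(3), assuming the Statistics and Machine Learning Toolbox (kmeans, linkage, cluster, gscatter): many randomly initialized k-means runs vote on pairwise co-membership, and a final hierarchical clustering cuts the consensus matrix. This mirrors the general workflow but is not the toolbox's code.

```matlab
% Evidence-accumulation ensemble: k-means members vote on co-membership.
rng(1);
X = [randn(50,2); randn(50,2) + 4; randn(50,2) - 4];  % three blobs
n = size(X, 1); nRuns = 30; K = 3;
C = zeros(n);                              % co-association counts
for r = 1:nRuns
    idx = kmeans(X, K);                    % one randomly seeded member
    C = C + (idx == idx');                 % +1 where a pair co-clusters
end
S = C / nRuns;                             % consensus similarity in [0,1]
Z = linkage(squareform(1 - S), 'average'); % cluster the consensus matrix
labels = cluster(Z, 'maxclust', K);        % final ensemble partition
gscatter(X(:,1), X(:,2), labels);
```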

  5. Polar format algorithm for SAR imaging with Matlab

    NASA Astrophysics Data System (ADS)

    Deming, Ross; Best, Matthew; Farrell, Sean

    2014-06-01

    Due to its computational efficiency, the polar format algorithm (PFA) is considered by many to be the workhorse for airborne synthetic aperture radar (SAR) imaging. PFA is implemented in spatial Fourier space, also known as "K-space", which is a convenient domain for understanding SAR performance metrics, sampling requirements, etc. In this paper the mathematics behind PFA are explained and computed examples are presented, both using simulated data, and experimental airborne radar data from the Air Force Research Laboratory (AFRL) Gotcha Challenge collect. In addition, a simple graphical method is described that can be used to model and predict wavefront curvature artifacts in PFA imagery, which are due to the limited validity of the underlying far-field approximation. The appendix includes Matlab code for computing SAR images using PFA.
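    The geometric core of PFA is resampling polar K-space data onto a rectangular grid so that a 2-D inverse FFT forms the image. The toy sketch below (invented radar parameters, ideal point scatterers, griddata for the interpolation) is a geometry-only illustration, not the processing chain or code from the paper:

```matlab
% Toy polar-format imaging: phase history on a polar K-space raster,
% resampled to a rectangular grid, then imaged with a 2-D inverse FFT.
c = 3e8; fc = 1e10; B = 3e8;                 % X-band-ish numbers (assumed)
f  = linspace(fc - B/2, fc + B/2, 128);      % fast-time frequencies [Hz]
th = linspace(-2, 2, 128) * pi/180;          % aspect angles [rad]
[F, TH] = ndgrid(f, th);
kx = 4*pi*F/c .* cos(TH);                    % polar K-space sample grid
ky = 4*pi*F/c .* sin(TH);
tgt = [0 0; 5 3; -4 6];                      % point scatterers [m]
S = zeros(size(kx));
for i = 1:size(tgt, 1)                       % ideal (noise-free) returns
    S = S + exp(1j*(kx*tgt(i,1) + ky*tgt(i,2)));
end
[KX, KY] = ndgrid(linspace(min(kx(:)), max(kx(:)), 256), ...
                  linspace(min(ky(:)), max(ky(:)), 256));
Sr = griddata(kx(:), ky(:), real(S(:)), KX, KY);   % polar -> rectangular
Si = griddata(kx(:), ky(:), imag(S(:)), KX, KY);
Sg = Sr + 1j*Si;  Sg(isnan(Sg)) = 0;               % zero-fill the corners
img = fftshift(ifft2(ifftshift(Sg)));              % image formation
imagesc(abs(img)); axis image; title('PFA sketch: three point targets');
```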

  6. Automated leukocyte processing by microfluidic deterministic lateral displacement.

    PubMed

    Civin, Curt I; Ward, Tony; Skelley, Alison M; Gandhi, Khushroo; Peilun Lee, Zendra; Dosier, Christopher R; D'Silva, Joseph L; Chen, Yu; Kim, MinJung; Moynihan, James; Chen, Xiaochun; Aurich, Lee; Gulnik, Sergei; Brittain, George C; Recktenwald, Diether J; Austin, Robert H; Sturm, James C

    2016-12-01

    We previously developed a Deterministic Lateral Displacement (DLD) microfluidic method in silicon to separate cells of various sizes from blood (Davis et al., Proc Natl Acad Sci 2006;103:14779-14784; Huang et al., Science 2004;304:987-990). Here, we present the reduction-to-practice of this technology with a commercially produced, high precision plastic microfluidic chip-based device designed for automated preparation of human leukocytes (white blood cells; WBCs) for flow cytometry, without centrifugation or manual handling of samples. After a human blood sample was incubated with fluorochrome-conjugated monoclonal antibodies (mAbs), the mixture was input to a DLD microfluidic chip (microchip) where it was driven through a micropost array designed to deflect WBCs via DLD on the basis of cell size from the Input flow stream into a buffer stream, thus separating WBCs and any larger cells from smaller cells and particles and washing them simultaneously. We developed a microfluidic cell processing protocol that recovered 88% (average) of input WBCs and removed 99.985% (average) of Input erythrocytes (red blood cells) and >99% of unbound mAb in 18 min (average). Flow cytometric evaluation of the microchip Product, with no further processing, lysis or centrifugation, revealed excellent forward and side light scattering and fluorescence characteristics of immunolabeled WBCs. These results indicate that cost-effective plastic DLD microchips can speed and automate leukocyte processing for high quality flow cytometry analysis, and suggest their utility for multiple other research and clinical applications involving enrichment or depletion of common or rare cell types from blood or tissue samples. © 2016 International Society for Advancement of Cytometry.
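    For context, DLD arrays are commonly sized with the empirical critical-diameter correlation of Davis et al. (2006), Dc = 1.4*g*eps^0.48, where g is the post gap and eps the row-shift fraction; particles above Dc are displaced ("bumped") into the clean stream. The numbers below are illustrative and do not describe the device in this paper:

```matlab
% Back-of-the-envelope DLD sizing with the Davis (2006) correlation.
g      = 20;                         % post gap [um] (assumed)
epsRow = 1/10;                       % row-shift fraction (assumed)
Dc     = 1.4 * g * epsRow^0.48;      % critical ("bump") diameter [um]
fprintf('particles larger than %.1f um are laterally displaced\n', Dc);
```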

  7. Contagion spreading on complex networks with local deterministic dynamics

    NASA Astrophysics Data System (ADS)

    Manshour, Pouya; Montakhab, Afshin

    2014-07-01

    Typically, contagion strength is modeled by a transmission rate λ, whereby all nodes in a network are treated uniformly in a mean-field approximation. However, local agents react differently to the same contagion based on their local characteristics. Following our recent work (Montakhab and Manshour, 2012 [42]), we investigate contagion spreading models with local dynamics on complex networks. We therefore quantify contagions by their quality, 0⩽α⩽1, and follow their spreading as their transmission condition (fitness) is evaluated by local agents. Instead of considering stochastic dynamics, here we consider various deterministic local rules. We find that initial spreading with exponential quality-dependent time scales is followed by a stationary state with a prevalence depending on the quality of the contagion. We also observe various interesting phenomena, for example, high prevalence without the participation of the hubs. This special feature of our "threshold rule" provides a mechanism for high-prevalence spreading without the participation of "super-spreaders", in sharp contrast with many standard mechanisms of spreading where hubs are believed to play the central role. On the other hand, if local nodes act as agents who stop the transmission once a threshold is reached, we find that spreading is severely hindered in a heterogeneous population while in a homogeneous one significant spreading may occur. We further decouple local characteristics from the underlying topology in order to study the role of network topology in various models and find that as long as the small-world effect exists, the underlying topology does not contribute to the final stationary state but only affects the initial spreading velocity.
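    As a toy illustration of a deterministic local rule (one plausible reading, not the authors' exact dynamics): each node carries a fixed threshold and adopts a contagion of quality alpha as soon as it has an infected neighbor and alpha exceeds that threshold.

```matlab
% Deterministic threshold rule on an Erdos-Renyi graph (illustrative).
rng(2);
n = 2000; p = 4/n;                        % mean degree ~4 (assumed)
A = sparse(double(triu(rand(n) < p, 1)));
A = A + A';                               % symmetric adjacency matrix
alpha = 0.6;                              % contagion quality in [0,1]
thr = rand(n, 1);                         % heterogeneous local thresholds
infected = false(n, 1);
infected(randi(n)) = true;                % single random seed
for t = 1:50                              % synchronous deterministic updates
    exposed = (A * double(infected)) > 0;
    newInf = exposed & ~infected & (alpha >= thr);
    if ~any(newInf), break; end
    infected = infected | newInf;
end
fprintf('final prevalence %.2f after %d steps\n', mean(infected), t);
```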

  8. "Eztrack": A single-vehicle deterministic tracking algorithm

    SciTech Connect

    Carrano, C J

    2007-12-20

    A variety of surveillance operations require the ability to track vehicles over a long period of time using sequences of images taken from a camera mounted on an airborne or similar platform. In order to be able to see and track a vehicle for any length of time, either a persistent surveillance imager is needed that can image wide fields of view over a long time-span or a highly maneuverable smaller field-of-view imager is needed that can follow the vehicle of interest. The algorithm described here was designed for the persistent surveillance case. It turns out that most vehicle tracking algorithms described in the literature [1,2,3,4] are designed for higher frame rates (> 5 FPS) and relatively short ground sampling distances (GSD) and resolutions (≈ a few cm to a couple tens of cm). But for our datasets, we are restricted to lower resolutions and GSDs (≥ 0.5 m) and limited frame rates (≤ 2.0 Hz). As a consequence, we designed our own simple approach in IDL, which is a deterministic, motion-guided object tracker. The object tracking relies on both object features and path dynamics. The algorithm certainly has room for future improvements, but we have found it to be a useful tool in evaluating the effects of frame rate, resolution/GSD, and spectral content (e.g. grayscale vs. color imaging). A block diagram of the tracking approach is given in Figure 1. We describe each of the blocks of the diagram in the upcoming sections.

  9. Ballistic deposition on deterministic fractals: Observation of discrete scale invariance

    NASA Astrophysics Data System (ADS)

    Horowitz, Claudio M.; Romá, Federico; Albano, Ezequiel V.

    2008-12-01

    The growth of ballistic aggregates on deterministic fractal substrates is studied by means of numerical simulations. First, we attempt the description of the evolving interface of the aggregates by applying the well-established Family-Vicsek dynamic scaling approach. Systematic deviations from that standard scaling law are observed, suggesting that significant scaling corrections have to be introduced in order to achieve a more accurate understanding of the behavior of the interface. Subsequently, we study the internal structure of the growing aggregates that can be rationalized in terms of the scaling behavior of frozen trees, i.e., structures inhibited for further growth, lying below the growing interface. It is shown that the rms height (h_s) and width (w_s) of the trees of size s obey power laws of the form h_s ∝ s^(ν_∥) and w_s ∝ s^(ν_⊥), respectively. Also, the tree-size distribution (n_s) behaves according to n_s ~ s^(-τ). Here, ν_∥ and ν_⊥ are the correlation length exponents in the directions parallel and perpendicular to the interface, respectively. Also, τ is a critical exponent. However, due to the interplay between the discrete scale invariance of the underlying fractal substrates and the dynamics of the growing process, all these power laws are modulated by logarithmic periodic oscillations. The fundamental scaling ratios, characteristic of these oscillations, can be linked to the (spatial) fundamental scaling ratio of the underlying fractal by means of relationships involving critical exponents. We argue that the interplay between the spatial discrete scale invariance of the fractal substrate and the dynamics of the physical process occurring in those media is a quite general phenomenon that leads to the observation of logarithmic-periodic modulations of physical observables.

  10. Deterministic and Stochastic Descriptions of Gene Expression Dynamics

    NASA Astrophysics Data System (ADS)

    Marathe, Rahul; Bierbaum, Veronika; Gomez, David; Klumpp, Stefan

    2012-09-01

    A key goal of systems biology is the predictive mathematical description of gene regulatory circuits. Different approaches are used such as deterministic and stochastic models, models that describe cell growth and division explicitly or implicitly etc. Here we consider simple systems of unregulated (constitutive) gene expression and compare different mathematical descriptions systematically to obtain insight into the errors that are introduced by various common approximations such as describing cell growth and division by an effective protein degradation term. In particular, we show that the population average of protein content of a cell exhibits a subtle dependence on the dynamics of growth and division, the specific model for volume growth and the age structure of the population. Nevertheless, the error made by models with implicit cell growth and division is quite small. Furthermore, we compare various models that are partially stochastic to investigate the impact of different sources of (intrinsic) noise. This comparison indicates that different sources of noise (protein synthesis, partitioning in cell division) contribute comparable amounts of noise if protein synthesis is not or only weakly bursty. If protein synthesis is very bursty, the burstiness is the dominant noise source, independent of other details of the model. Finally, we discuss two sources of extrinsic noise: cell-to-cell variations in protein content due to cells being at different stages in the division cycles, which we show to be small (for the protein concentration and, surprisingly, also for the protein copy number per cell) and fluctuations in the growth rate, which can have a significant impact.
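    The core comparison is easy to reproduce in miniature. Below is a hedged sketch (arbitrary rate constants, base MATLAB only) of constitutive expression as a birth-death process: a Gillespie (SSA) ensemble averaged over many runs against the deterministic solution p(t) = (k/g)(1 - exp(-g*t)) of dp/dt = k - g*p, where g lumps degradation and dilution by growth.

```matlab
% Constitutive expression as a birth-death process: SSA ensemble vs ODE.
k = 10; g = 0.1; T = 60; nRuns = 200;      % arbitrary rates and horizon
tq = linspace(0, T, 200); Pavg = zeros(size(tq));
rng(3);
for r = 1:nRuns
    t = 0; p = 0; tt = 0; pp = 0;          % one Gillespie realization
    while t < T
        a  = [k, g*p];                     % propensities: birth, death
        a0 = sum(a);
        t  = t - log(rand)/a0;             % exponential waiting time
        if rand*a0 < a(1), p = p + 1; else, p = p - 1; end
        tt(end+1) = t; pp(end+1) = p;      %#ok<AGROW>
    end
    Pavg = Pavg + interp1(tt, pp, tq, 'previous', pp(end));
end
Pavg = Pavg / nRuns;
plot(tq, Pavg, tq, (k/g)*(1 - exp(-g*tq)), '--');
legend('SSA ensemble mean', 'deterministic ODE', 'Location', 'southeast');
```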

  11. Parkinson's disease classification using gait analysis via deterministic learning.

    PubMed

    Zeng, Wei; Liu, Fenglin; Wang, Qinghui; Wang, Ying; Ma, Limin; Zhang, Yu

    2016-10-28

    Gait analysis plays an important role in maintaining the well-being of human mobility and health care, and is a valuable tool for obtaining quantitative information on motor deficits in Parkinson's disease (PD). In this paper, we propose a method to classify (diagnose) patients with PD and healthy control subjects using gait analysis via deterministic learning theory. The classification approach consists of two phases: a training phase and a classification phase. In the training phase, gait characteristics represented by the gait dynamics are derived from the vertical ground reaction forces under the usual and self-selected paces of the subjects. The gait dynamics underlying gait patterns of healthy controls and PD patients are locally accurately approximated by radial basis function (RBF) neural networks. The obtained knowledge of approximated gait dynamics is stored in constant RBF networks. The gait patterns of healthy controls and PD patients constitute a training set. In the classification phase, a bank of dynamical estimators is constructed for all the training gait patterns. Prior knowledge of gait dynamics represented by the constant RBF networks is embedded in the estimators. By comparing the set of estimators with a test gait pattern of a certain PD patient to be classified (diagnosed), a set of classification errors are generated. The average L1 norms of the errors are taken as the classification measure between the dynamics of the training gait patterns and the dynamics of the test PD gait pattern according to the smallest error principle. When the gait patterns of 93 PD patients and 73 healthy controls are classified with a five-fold cross-validation method, the accuracy, sensitivity and specificity of the results are 96.39%, 96.77% and 95.89%, respectively. Based on the results, it may be claimed that the features and the classifiers used in the present study could effectively separate the gait patterns between the groups of PD patients and healthy controls.

  12. Epidemiology of child deaths due to drowning in Matlab, Bangladesh.

    PubMed

    Ahmed, M K; Rahman, M; van Ginneken, J

    1999-04-01

    A study based upon verbal autopsies conducted in a sample of children who died in Bangladesh during 1989-92 found that approximately 21% of deaths among children aged 1-4 years were due to drowning. Such mortality may be expected in Bangladesh, for its villages are usually surrounded and intersected by canals and rivers, and there are many ponds surrounding households which are used for bathing and washing year round. Children also play in these bodies of water, and most villages are inundated by the monsoon for several months each year. Drawn from the Matlab Demographic Surveillance System (DSS) operated by the International Center for Diarrheal Disease Research, Bangladesh (ICDDR,B), data are presented on the mortality of children aged 1-4 years due to drowning in Matlab thana, a rural area of Bangladesh, during 1983-95. 10-25% of child deaths during 1983-95 were due to drowning. The absolute risk of dying from drowning remained almost the same over the study period, but the proportion of drownings to all causes of death increased. Drowning is especially prevalent during the second year of life. Mother's age and parity significantly affect drowning, with the risk of dying from drowning increasing with mother's age and far more sharply with the number of living children in the family. Maternal education and dwelling space had no influence upon the risk of drowning. A major portion of these deaths could be averted if parents and other close relatives paid more attention to child safety.

  13. A Matlab-based finite-difference solver for the Poisson problem with mixed Dirichlet-Neumann boundary conditions

    NASA Astrophysics Data System (ADS)

    Reimer, Ashton S.; Cheviakov, Alexei F.

    2013-03-01

    A Matlab-based finite-difference numerical solver for the Poisson equation for a rectangle and a disk in two dimensions, and a spherical domain in three dimensions, is presented. The solver is optimized for handling an arbitrary combination of Dirichlet and Neumann boundary conditions, and allows for full user control of mesh refinement. The solver routines utilize effective and parallelized sparse vector and matrix operations. Computations exhibit high speeds, numerical stability with respect to mesh size and mesh refinement, and acceptable error values even on desktop computers. Catalogue identifier: AENQ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENQ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License v3.0 No. of lines in distributed program, including test data, etc.: 102793 No. of bytes in distributed program, including test data, etc.: 369378 Distribution format: tar.gz Programming language: Matlab 2010a. Computer: PC, Macintosh. Operating system: Windows, OSX, Linux. RAM: 8 GB (8,589,934,592 bytes) Classification: 4.3. Nature of problem: To solve the Poisson problem in a standard domain with “patchy surface”-type (strongly heterogeneous) Neumann/Dirichlet boundary conditions. Solution method: Finite difference with mesh refinement. Restrictions: Spherical domain in 3D; rectangular domain or a disk in 2D. Unusual features: Choice between mldivide/iterative solver for the solution of large system of linear algebraic equations that arise. Full user control of Neumann/Dirichlet boundary conditions and mesh refinement. Running time: Depending on the number of points taken and the geometry of the domain, the routine may take from less than a second to several hours to execute.
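    For orientation, the core of such a solver is only a few lines in MATLAB: build the sparse 5-point Laplacian and hand it to mldivide. The sketch below solves Delta(u) = f on the unit square with homogeneous Dirichlet walls and a manufactured solution; the published package adds the mixed Neumann/Dirichlet "patchy" boundaries, disk and spherical domains, and mesh refinement.

```matlab
% 5-point Laplacian on the unit square, homogeneous Dirichlet walls.
n = 100; h = 1/(n + 1);
x = h*(1:n); [X, Y] = meshgrid(x, x);
uex = sin(pi*X) .* sin(pi*Y);              % manufactured exact solution
f = -2*pi^2 * uex;                         % so that Laplacian(u) = f
e = ones(n, 1);
K = spdiags([e -2*e e], -1:1, n, n) / h^2; % 1-D second-difference matrix
A = kron(speye(n), K) + kron(K, speye(n)); % sparse 2-D Laplacian
u = A \ f(:);                              % direct solve via mldivide
fprintf('max nodal error: %.2e\n', max(abs(u - uex(:))));
```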

  14. GPU accelerated simulations of 3D deterministic particle transport using discrete ordinates method

    SciTech Connect

    Gong Chunye; Liu Jie; Chi Lihua; Huang Haowei; Fang Jingyue; Gong Zhenghu

    2011-07-01

    Graphics Processing Unit (GPU), originally developed for real-time, high-definition 3D graphics in computer games, now provides great capability in solving scientific applications. The basis of particle transport simulation is the time-dependent, multi-group, inhomogeneous Boltzmann transport equation. The numerical solution to the Boltzmann equation involves the discrete ordinates (S_n) method and the procedure of source iteration. In this paper, we present a GPU accelerated simulation of one energy group time-independent deterministic discrete ordinates particle transport in 3D Cartesian geometry (Sweep3D). The performance of the GPU simulations is reported for simulations with vacuum boundary conditions. The relative advantages and disadvantages of the GPU implementation, the simulation on multiple GPUs, the programming effort and code portability are also discussed. The results show that the overall performance speedup of one NVIDIA Tesla M2050 GPU ranges from 2.56 compared with one Intel Xeon X5670 chip to 8.14 compared with one Intel Core Q6600 chip for no flux fixup. The simulation with flux fixup on one M2050 is 1.23 times faster than on one X5670.

  15. ShareSync: A Solution for Deterministic Data Sharing over Ethernet

    NASA Technical Reports Server (NTRS)

    Dunn, Daniel J., II; Koons, William A.; Kennedy, Richard D.; Davis, Philip A.

    2007-01-01

    As part of upgrading the Contact Dynamics Simulation Laboratory (CDSL) at the NASA Marshall Space Flight Center (MSFC), a simple, cost effective method was needed to communicate data among the networked simulation machines and I/O controllers used to run the facility. To fill this need and similar applicable situations, a generic protocol was developed, called ShareSync. ShareSync is a lightweight, real-time, publish-subscribe Ethernet protocol for simple and deterministic data sharing across diverse machines and operating systems. ShareSync provides a simple Application Programming Interface (API) for simulation programmers to incorporate into their code. The protocol is compatible with virtually all Ethernet-capable machines, is flexible enough to support a variety of applications, is fast enough to provide soft real-time determinism, and is a low-cost resource for distributed simulation development, deployment, and maintenance. The first design cycle iteration of ShareSync has been completed, and the protocol has undergone several testing procedures, including endurance and benchmarking tests, and approaches the data synchronization design goal for the CDSL.

  16. A fast method for video deblurring based on a combination of gradient methods and denoising algorithms in Matlab and C environments

    NASA Astrophysics Data System (ADS)

    Mirzadeh, Zeynab; Mehri, Razieh; Rabbani, Hossein

    2010-01-01

    In this paper degraded video with blur and noise is enhanced using an iterative algorithm. In this algorithm we first estimate the clean data and blur function using the Newton optimization method, and then the estimation is improved using appropriate denoising methods. These noise reduction techniques are based on local statistics of the clean data and blur function. For the estimated blur function we use the LPA-ICI (local polynomial approximation - intersection of confidence intervals) method, which uses an anisotropic window around each point and obtains the enhanced data by employing a Wiener filter in this local window. Similarly, to improve the quality of the estimated clean video, we first transform the data to the wavelet domain and then improve our estimation using a maximum a posteriori (MAP) estimator and a local Laplace prior. This procedure (initial estimation and improvement of the estimation by denoising) is iterated until the clean video is obtained. The implementation of this algorithm is slow in the MATLAB environment and so is not suitable for online applications. However, MATLAB has the capability of running functions written in C. The files which hold the source for these functions are called MEX-files, and MEX functions allow system-specific APIs to be called to extend MATLAB's abilities. So, in this paper, to speed up our algorithm, the MATLAB code is sectioned, the elapsed time for each section is measured, and the slow sections (which use 60% of the complete running time) are selected. These slow sections are then translated to C++ and linked to MATLAB. In fact, the high load of image data processed in the "for" loops of the relevant code makes MATLAB an unsuitable candidate for writing such programs. The MATLAB code for our video deblurring algorithm contains eight "for" loops, which consume 60% of the total execution time of the entire program, and so the runtime should be substantially reduced by converting them to C++ MEX functions.

  17. A Matlab/Simulink-Based Interactive Module for Servo Systems Learning

    ERIC Educational Resources Information Center

    Aliane, N.

    2010-01-01

    This paper presents an interactive module for learning both the fundamental and practical issues of servo systems. This module, developed using Simulink in conjunction with the Matlab graphical user interface (Matlab-GUI) tool, is used to supplement conventional lectures in control engineering and robotics subjects. First, the paper introduces the…

  18. ImageJ-MATLAB: a bidirectional framework for scientific image analysis interoperability.

    PubMed

    Hiner, Mark C; Rueden, Curtis T; Eliceiri, Kevin W

    2016-10-26

    ImageJ-MATLAB is a lightweight Java library facilitating bi-directional interoperability between MATLAB and ImageJ. By defining a standard for translation between matrix and image data structures, researchers are empowered to select the best tool for their image-analysis tasks.

  19. A MATLAB-Aided Method for Teaching Calculus-Based Business Mathematics

    ERIC Educational Resources Information Center

    Liang, Jiajuan; Pan, William S. Y.

    2009-01-01

    MATLAB is a powerful package for numerical computation. MATLAB contains a rich pool of mathematical functions and provides flexible plotting functions for illustrating mathematical solutions. The course of calculus-based business mathematics consists of two major topics: 1) derivative and its applications in business; and 2) integration and its…

  20. Neo-Deterministic and Probabilistic Seismic Hazard Assessments: a Comparative Analysis

    NASA Astrophysics Data System (ADS)

    Peresan, Antonella; Magrin, Andrea; Nekrasova, Anastasia; Kossobokov, Vladimir; Panza, Giuliano F.

    2016-04-01

    Objective testing is the key issue towards any reliable seismic hazard assessment (SHA). Different earthquake hazard maps must demonstrate their capability in anticipating ground shaking from future strong earthquakes before being put to use for different purposes, such as engineering design, insurance, and emergency management. Quantitative assessment of map performance is also an essential step in the scientific process of their revision and possible improvement. Cross-checking of probabilistic models against available observations and independent physics-based models is recognized as a major validation procedure. The existing maps from classical probabilistic seismic hazard analysis (PSHA), as well as those from neo-deterministic analysis (NDSHA), which have already been developed for several regions worldwide (including Italy, India and North Africa), are considered to exemplify the possibilities of cross-comparative analysis in spotting the limits and advantages of the different methods. Where the data permit, a comparative analysis against the documented seismic activity observed in reality is carried out, showing how available observations about past earthquakes can contribute to assessing the performance of the different methods. Neo-deterministic refers to a scenario-based approach, which allows for consideration of a wide range of possible earthquake sources as the starting point for scenarios constructed via full waveform modeling. The method does not make use of empirical attenuation models (i.e. Ground Motion Prediction Equations, GMPE) and naturally supplies realistic time series of ground shaking (i.e. complete synthetic seismograms), readily applicable to complete engineering analysis and other mitigation actions. The standard NDSHA maps provide reliable envelope estimates of maximum seismic ground motion from a wide set of possible scenario earthquakes, including the largest deterministically or historically defined credible earthquake. In addition

  1. Deterministic Computer-Controlled Polishing Process for High-Energy X-Ray Optics

    NASA Technical Reports Server (NTRS)

    Khan, Gufran S.; Gubarev, Mikhail; Speegle, Chet; Ramsey, Brian

    2010-01-01

    A deterministic computer-controlled polishing process for large X-ray mirror mandrels is presented. Using the tool's influence function and material removal rate extracted from polishing experiments, design considerations for polishing laps and optimized operating parameters are discussed.

  2. A deterministic particle method for one-dimensional reaction-diffusion equations

    NASA Technical Reports Server (NTRS)

    Mascagni, Michael

    1995-01-01

    We derive a deterministic particle method for the solution of nonlinear reaction-diffusion equations in one spatial dimension. This deterministic method is an analog of a Monte Carlo method for the solution of these problems that has been previously investigated by the author. The deterministic method leads to the consideration of a system of ordinary differential equations for the positions of suitably defined particles. We then consider the time explicit and implicit methods for this system of ordinary differential equations and we study a Picard and Newton iteration for the solution of the implicit system. Next we solve numerically this system and study the discretization error both analytically and numerically. Numerical computation shows that this deterministic method is automatically adaptive to large gradients in the solution.

  3. Disentangling Mechanisms That Mediate the Balance Between Stochastic and Deterministic Processes in Microbial Succession

    SciTech Connect

    Dini-Andreote, Francisco; Stegen, James C.; van Elsas, Jan D.; Falcao Salles, Joana

    2015-03-17

    Despite growing recognition that deterministic and stochastic factors simultaneously influence bacterial communities, little is known about mechanisms shifting their relative importance. To better understand underlying mechanisms, we developed a conceptual model linking ecosystem development during primary succession to shifts in the stochastic/deterministic balance. To evaluate the conceptual model we coupled spatiotemporal data on soil bacterial communities with environmental conditions spanning 105 years of salt marsh development. At the local scale there was a progression from stochasticity to determinism due to Na accumulation with increasing ecosystem age, supporting a main element of the conceptual model. At the regional-scale, soil organic matter (SOM) governed the relative influence of stochasticity and the type of deterministic ecological selection, suggesting scale-dependency in how deterministic ecological selection is imposed. Analysis of a new ecological simulation model supported these conceptual inferences. Looking forward, we propose an extended conceptual model that integrates primary and secondary succession in microbial systems.

  4. Recent Achievements of the Neo-Deterministic Seismic Hazard Assessment in the CEI Region

    SciTech Connect

    Panza, G. F.; Kouteva, M.; Vaccari, F.; Peresan, A.; Romanelli, F.; Cioflan, C. O.; Radulian, M.; Marmureanu, G.; Paskaleva, I.; Gribovszki, K.; Varga, P.; Herak, M.; Zaichenco, A.; Zivcic, M.

    2008-07-08

    A review of the recent achievements of the innovative neo-deterministic approach for seismic hazard assessment through realistic earthquake scenarios has been performed. The procedure provides strong ground motion parameters for the purpose of earthquake engineering, based on deterministic seismic wave propagation modelling at different scales--regional, national and metropolitan. The main advantage of this neo-deterministic procedure is the simultaneous treatment of the contribution of the earthquake source and seismic wave propagation media to the strong motion at the target site/region, as required by basic physical principles. The neo-deterministic seismic microzonation procedure has been successfully applied to numerous metropolitan areas all over the world in the framework of several international projects. In this study some examples focused on the CEI region, concerning both regional seismic hazard assessment and seismic microzonation of selected metropolitan areas, are shown.

  5. A deterministic and statistical energy analysis of tyre cavity resonance noise

    NASA Astrophysics Data System (ADS)

    Mohamed, Zamri; Wang, Xu

    2016-03-01

    Tyre cavity resonance was studied using a combination of deterministic analysis and statistical energy analysis, where the deterministic part was implemented using the impedance compact mobility matrix method and the statistical part was handled by the statistical energy analysis method. While the impedance compact mobility matrix method can offer a deterministic solution to the cavity pressure response and the compliant wall vibration velocity response in the low frequency range, the statistical energy analysis method can offer a statistical solution of the responses in the high frequency range. In the mid frequency range, a combination of the statistical energy analysis and deterministic analysis methods can identify system coupling characteristics. Both methods have been compared against commercial software in order to validate the results. The combined analysis result has been verified by measurement results from a tyre-cavity physical model. The analysis method developed in this study can be applied to other similar toroidal shape structural-acoustic systems.

  6. DETERMINISTIC PRODUCTION PLANNING WITH CONCAVE COSTS AND CAPACITY CONSTRAINTS.

    DTIC Science & Technology

    (*INDUSTRIAL PRODUCTION, MANAGEMENT PLANNING AND CONTROL), (*PRODUCTION CONTROL, DYNAMIC PROGRAMMING), INVENTORY ANALYSIS, SCHEDULING, COST EFFECTIVENESS, STORAGE, MANPOWER, OPTIMIZATION, MATHEMATICAL MODELS, ALGORITHMS

  7. An efficient Matlab script to calculate heterogeneous anisotropically elastic wave propagation in three dimensions

    USGS Publications Warehouse

    Boyd, O.S.

    2006-01-01

    We have created a second-order finite-difference solution to the anisotropic elastic wave equation in three dimensions and implemented the solution as an efficient Matlab script. This program allows the user to generate synthetic seismograms for three-dimensional anisotropic earth structure. The code was written for teleseismic wave propagation in the 1-0.1 Hz frequency range but is of general utility and can be used at all scales of space and time. This program was created to help distinguish among various types of lithospheric structure given the uneven distribution of sources and receivers commonly utilized in passive source seismology. Several successful implementations have resulted in a better appreciation for subduction zone structure, the fate of a transform fault with depth, lithospheric delamination, and the effects of wavefield focusing and defocusing on attenuation. Companion scripts are provided which help the user prepare input to the finite-difference solution. Boundary conditions including specification of the initial wavefield, absorption and two types of reflection are available. © 2005 Elsevier Ltd. All rights reserved.
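    A one-dimensional analogue makes the numerical scheme concrete: the standard second-order displacement update u(t+dt) = 2u(t) - u(t-dt) + dt^2 (C/rho) u_xx. The material values below are arbitrary and the boundaries crude; the published script handles full 3-D anisotropic media.

```matlab
% 1-D second-order scheme: rho*u_tt = C*u_xx, pulse speed v = sqrt(C/rho).
nx = 1000; dx = 10;                   % grid spacing [m] (assumed)
rho = 2700; C = rho * 3000^2;         % gives v = 3000 m/s
v = sqrt(C/rho); dt = 0.8 * dx / v;   % CFL-limited time step
x = (0:nx-1)' * dx;
u  = exp(-((x - 5000)/200).^2);       % initial displacement pulse
um = u;                               % previous step (starts at rest)
for it = 1:400
    uxx = ([u(2:end); u(end)] - 2*u + [u(1); u(1:end-1)]) / dx^2;
    un = 2*u - um + dt^2 * (C/rho) * uxx;   % explicit leapfrog update
    um = u; u = un;
end
plot(x, u); xlabel('x [m]'); ylabel('u'); title('split Gaussian pulse');
```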

  8. Implementation of Gy-Eq for deterministic effects limitation in shield design

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Kim, Myung-Hee Y.; De Angelis, Giovanni; Cucinotta, Francis A.; Yoshizawa, Nobuaki; Badavi, Francis F.

    2002-01-01

    The NCRP has recently defined RBE values and a new quantity (Gy-Eq) for use in estimation of deterministic effects in space shielding and operations. The NCRP's RBE for neutrons is left ambiguous and not fully defined. In the present report we will suggest a complete definition of neutron RBE consistent with the NCRP recommendations and evaluate attenuation properties of deterministic effects (Gy-Eq) in comparison with other dosimetric quantities.

  9. On the application of deterministic optimization methods to stochastic control problems

    NASA Technical Reports Server (NTRS)

    Kramer, L. C.; Athans, M.

    1974-01-01

    A technique is presented by which deterministic optimization techniques, for example, the maximum principle of Pontriagin, can be applied to stochastic optimal control problems formulated around linear systems with Gaussian noises and general cost criteria. Using this technique, the stochastic nature of the problem is suppressed but for two expectation operations, the optimization being deterministic. The use of the technique in treating problems with quadratic and nonquadratic costs is illustrated.

  10. Deterministic methods in radiation transport. A compilation of papers presented February 4-5, 1992

    SciTech Connect

    Rice, A. F.; Roussin, R. W.

    1992-06-01

    The Seminar on Deterministic Methods in Radiation Transport was held February 4--5, 1992, in Oak Ridge, Tennessee. Eleven presentations were made and the full papers are published in this report, along with three that were submitted but not given orally. These papers represent a good overview of the state of the art in the deterministic solution of radiation transport problems for a variety of applications of current interest to the Radiation Shielding Information Center user community.

  12. Application of tabu search to deterministic and stochastic optimization problems

    NASA Astrophysics Data System (ADS)

    Gurtuna, Ozgur

    During the past two decades, advances in computer science and operations research have resulted in many new optimization methods for tackling complex decision-making problems. One such method, tabu search, forms the basis of this thesis. Tabu search is a very versatile optimization heuristic that can be used for solving many different types of optimization problems. Another research area, real options, has also gained considerable momentum during the last two decades. Real options analysis is emerging as a robust and powerful method for tackling decision-making problems under uncertainty. Although the theoretical foundations of real options are well-established and significant progress has been made on the theory side, applications are lagging behind. A strong emphasis on practical applications and a multidisciplinary approach form the basic rationale of this thesis. The fundamental concepts and ideas behind tabu search and real options are investigated in order to provide a concise overview of the theory supporting both of these two fields. This theoretical overview feeds into the design and development of algorithms that are used to solve three different problems. The first problem examined is a deterministic one: finding the optimal servicing tours that minimize energy and/or duration of missions for servicing satellites around Earth's orbit. Due to the nature of the space environment, this problem is modeled as a time-dependent, moving-target optimization problem. Two solution methods are developed: an exhaustive method for smaller problem instances, and a method based on tabu search for larger ones. The second and third problems are related to decision-making under uncertainty. In the second problem, tabu search and real options are investigated together within the context of a stochastic optimization problem: option valuation. By merging tabu search and Monte Carlo simulation, a new method for studying options, the Tabu Search Monte Carlo (TSMC) method, is developed.
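    For readers new to the heuristic, a bare-bones tabu search on a small symmetric TSP (base MATLAB, invented coordinates, swap neighborhood, fixed tenure, aspiration on the incumbent) shows the mechanics the thesis builds on:

```matlab
% Minimal tabu search for a small symmetric TSP (swap neighborhood).
rng(4);
n = 25; pts = rand(n, 2);
D = hypot(pts(:,1) - pts(:,1)', pts(:,2) - pts(:,2)');   % distance matrix
tourLen = @(t) sum(D(sub2ind([n n], t, t([2:end 1]))));
cur = randperm(n); best = cur;
tabu = zeros(n); tenure = 8;              % expiry iteration per city pair
for it = 1:500
    bestVal = inf; bestMove = [1 2];
    for i = 1:n-1
        for j = i+1:n
            cand = cur; cand([i j]) = cand([j i]);
            val = tourLen(cand);
            % admissible if not tabu, or if it beats the incumbent (aspiration)
            if (tabu(cur(i), cur(j)) < it || val < tourLen(best)) && val < bestVal
                bestVal = val; bestMove = [i j];
            end
        end
    end
    i = bestMove(1); j = bestMove(2);
    tabu(cur(i), cur(j)) = it + tenure;   % forbid re-swapping this pair
    tabu(cur(j), cur(i)) = it + tenure;
    cur([i j]) = cur([j i]);
    if tourLen(cur) < tourLen(best), best = cur; end
end
fprintf('best tour length found: %.3f\n', tourLen(best));
```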

  13. Confined Crystal Growth in Space. Deterministic vs Stochastic Vibroconvective Effects

    NASA Astrophysics Data System (ADS)

    Ruiz, Xavier; Bitlloch, Pau; Ramirez-Piscina, Laureano; Casademunt, Jaume

    The analysis of the correlations between characteristics of the acceleration environment and the quality of the crystalline materials grown in microgravity remains an open and interesting question. Acceleration disturbances in space environments usually give rise to effective gravity pulses, gravity pulse trains of finite duration, quasi-steady accelerations or g-jitters. To quantify these disturbances, deterministic translational plane polarized signals have largely been used in the literature [1]. In the present work, we take an alternative approach which models g-jitters in terms of a stochastic process in the form of the so-called narrow-band noise, which is designed to capture the main statistical properties of realistic g-jitters. In particular, we compare their effects to those of single-frequency disturbances. The crystalline quality has been characterized, following previous analyses, in terms of two parameters, the longitudinal and the radial segregation coefficients. The first one averages the dopant distribution transversally, providing continuous longitudinal information on the degree of segregation along the growth process. The radial segregation characterizes the degree of lateral non-uniformity of the dopant at the solid-liquid interface at each instant of growth. In order to complete the description, and because the heat flux fluctuations at the interface have a direct impact on crystal quality (growth striations), the time dependence of a Nusselt number associated with the growing interface has also been monitored. For realistic g-jitters acting orthogonally to the thermal gradient, the longitudinal segregation remains practically unperturbed in all simulated cases. Also, the Nusselt number is not significantly affected by the noise. On the other hand, radial segregation, despite its low magnitude, exhibits a peculiar low-frequency response in all realizations. [1] X. Ruiz, "Modelling of the influence of residual gravity on the segregation in

  14. Hybrid Monte Carlo/deterministic methods for radiation shielding problems

    NASA Astrophysics Data System (ADS)

    Becker, Troy L.

    For the past few decades, the most common type of deep-penetration (shielding) problem simulated using Monte Carlo methods has been the source-detector problem, in which a response is calculated at a single location in space. Traditionally, the nonanalog Monte Carlo methods used to solve these problems have required significant user input to generate and sufficiently optimize the biasing parameters necessary to obtain a statistically reliable solution. It has been demonstrated that this laborious task can be replaced by automated processes that rely on a deterministic adjoint solution to set the biasing parameters---the so-called hybrid methods. The increase in computational power over recent years has also led to interest in obtaining the solution in a region of space much larger than a point detector. In this thesis, we propose two methods for solving problems ranging from source-detector problems to more global calculations---weight windows and the Transform approach. These techniques employ some of the same biasing elements that have been used previously; however, the fundamental difference is that here the biasing techniques are used as elements of a comprehensive tool set to distribute Monte Carlo particles in a user-specified way. The weight window achieves the user-specified Monte Carlo particle distribution by imposing a particular weight window on the system, without altering the particle physics. The Transform approach introduces a transform into the neutron transport equation, which results in a complete modification of the particle physics to produce the user-specified Monte Carlo distribution. These methods are tested in a three-dimensional multigroup Monte Carlo code. For a basic shielding problem and a more realistic one, these methods adequately solved source-detector problems and more global calculations. Furthermore, they confirmed that theoretical Monte Carlo particle distributions correspond to the simulated ones, implying that these methods

  15. Deterministic Modeling of the High Temperature Test Reactor

    SciTech Connect

    Ortensi, J.; Cogliati, J. J.; Pope, M. A.; Ferrer, R. M.; Ougouag, A. M.

    2010-06-01

    Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability of the Next Generation Nuclear Plant (NGNP) project. In order to examine INL's current prismatic reactor deterministic analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of a model for the initial criticality, a 19 column thin annular core, and the fully loaded core critical condition with 30 columns. Special emphasis is devoted to the annular core modeling, which shares more characteristics with the NGNP base design. The DRAGON code is used in this study because it offers significant ease and versatility in modeling prismatic designs. Despite some geometric limitations, the code performs quite well compared to other lattice physics codes. DRAGON can generate transport solutions via collision probability (CP), method of characteristics (MOC), and discrete ordinates (Sn). A fine group cross section library based on the SHEM 281 energy structure is used in the DRAGON calculations. HEXPEDITE is the hexagonal-z full core solver used in this study and is based on the Green's function solution of the transverse integrated equations. In addition, two Monte Carlo (MC) based codes, MCNP5 and PSG2/SERPENT, provide benchmarking capability for the DRAGON and the nodal diffusion solver codes. The results from this study show a consistent bias of 2-3% for the core multiplication factor. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature. The ENDF/B VII graphite and U235 cross sections appear to be the main source of the error. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with other benchmark participants but are 40% higher than the experimental values. This discrepancy with the measurement stems from the fact that during the experiments the

  16. Application of MATLAB and Python optimizers to two case studies involving groundwater flow and contaminant transport modeling

    NASA Astrophysics Data System (ADS)

    Matott, L. Shawn; Leung, Kenny; Sim, Junyoung

    2011-11-01

    One approach for utilizing geoscience models for management or policy analysis is via a simulation-based optimization framework—where an underlying model is linked with an optimization search algorithm. In this regard, MATLAB and Python are high-level programming languages that implement numerous optimization routines, including gradient-based, heuristic, and direct-search optimizers. The ever-expanding number of available algorithms makes it challenging for practitioners to identify optimizers that deliver good performance when applied to problems of interest. Thus, the primary contribution of this paper is to present a series of numerical experiments that investigated the performance of various MATLAB and Python optimizers. The experiments considered two simulation-based optimization case studies involving groundwater flow and contaminant transport. One case study examined the design of a pump-and-treat system for groundwater remediation, while the other considered least-squares calibration of a model of strontium (Sr) transport. Using these case studies, the performance of 12 different MATLAB and Python optimizers was compared. Overall, the Hooke-Jeeves direct search algorithm yielded the best performance in terms of identifying least-cost and best-fit solutions to the design and calibration problems, respectively. The IFFCO (implicit filtering for constrained optimization) direct search algorithm and the dynamically dimensioned search (DDS) heuristic algorithm also consistently yielded good performance and were up to 80% more efficient than Hooke-Jeeves when applied to the pump-and-treat problem. These results provide empirical evidence that, relative to gradient- and population-based alternatives, direct search algorithms and heuristic variants, such as DDS, are good choices for application to simulation-based optimization problems involving groundwater management.
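    The flavor of such a comparison is easy to reproduce: run two optimizers against one objective and compare the returned minima. In the sketch below, the Rosenbrock function stands in for the expensive simulation-based cost; fminsearch is base MATLAB's Nelder-Mead simplex, and patternsearch (if the Global Optimization Toolbox is installed) is MATLAB's generalized pattern search, the family that includes Hooke-Jeeves-style moves.

```matlab
% Rosenbrock stands in for the expensive simulation-based objective.
rosen = @(x) 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;
x0 = [-1.2, 1];
[xs, fval] = fminsearch(rosen, x0);        % Nelder-Mead simplex (base)
fprintf('fminsearch:    f = %.3g at (%.3f, %.3f)\n', fval, xs);
if exist('patternsearch', 'file') == 2     % Global Optimization Toolbox
    [xp, fp] = patternsearch(rosen, x0);   % generalized pattern search
    fprintf('patternsearch: f = %.3g at (%.3f, %.3f)\n', fp, xp);
end
```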

  17. tweezercalib 2.0: Faster version of MatLab package for precise calibration of optical tweezers

    NASA Astrophysics Data System (ADS)

    Hansen, Poul Martin; Tolić-Nørrelykke, Iva Marija; Flyvbjerg, Henrik; Berg-Sørensen, Kirstine

    2006-03-01

    We present a vectorized version of the MatLab (MathWorks Inc.) package tweezercalib for calibration of optical tweezers with precision. The calibration is based on the power spectrum of the Brownian motion of a dielectric bead trapped in the tweezers. Precision is achieved by accounting for a number of factors that affect this power spectrum, as described in v. 1 of the package [I.M. Tolić-Nørrelykke, K. Berg-Sørensen, H. Flyvbjerg, Matlab program for precision calibration of optical tweezers, Comput. Phys. Comm. 159 (2004) 225-240]. The graphical user interface allows the user to include or leave out each of these factors. Several "health tests" are applied to the experimental data during calibration, and test results are displayed graphically. Thus, the user can easily see whether the data comply with the theory used for their interpretation. Final calibration results are given with statistical errors and covariance matrix. New version program summary. Title of program: tweezercalib Catalogue identifier: ADTV_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADTV_v2_0 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Reference in CPC to previous version: I.M. Tolić-Nørrelykke, K. Berg-Sørensen, H. Flyvbjerg, Comput. Phys. Comm. 159 (2004) 225 Catalogue identifier of previous version: ADTV Does the new version supersede the original program: Yes Computer for which the program is designed and others on which it has been tested: General computer running MatLab (Mathworks Inc.) Operating systems under which the program has been tested: Windows2000, Windows-XP, Linux Programming language used: MatLab (Mathworks Inc.), standard license Memory required to execute with typical data: Of order four times the size of the data file High speed storage required: none No. of lines in distributed program, including test data, etc.: 135 989 No. of bytes in distributed program, including test data, etc.: 1 527 611 Distribution format: tar.gz
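    The physics behind the calibration can be sketched in a few lines: the position power spectrum of a trapped bead is Lorentzian, P(f) = D / (2 pi^2 (fc^2 + f^2)), and fitting the corner frequency fc gives the trap stiffness k = 2 pi gamma fc. The code below is an illustration and not the package: it generates a synthetic Ornstein-Uhlenbeck trace for an assumed 1 um bead in water and recovers fc with fminsearch.

```matlab
% Synthetic bead trace (Ornstein-Uhlenbeck) and Lorentzian spectrum fit.
fs = 65536; T = 4; N = fs*T;
gamma = 6*pi*1e-3*0.5e-6;            % Stokes drag, 1 um bead in water [kg/s]
kBT = 4.11e-21;                      % thermal energy at ~298 K [J]
D = kBT/gamma;                       % diffusion coefficient [m^2/s]
fcTrue = 500;                        % corner frequency to recover [Hz]
x = zeros(N, 1);
for i = 2:N                          % Euler-Maruyama OU update
    x(i) = x(i-1)*(1 - 2*pi*fcTrue/fs) + sqrt(2*D/fs)*randn;
end
X = fft(x); fr = (0:N-1)'*fs/N;
P = abs(X).^2 / (fs*N);              % two-sided periodogram [m^2/Hz]
sel = fr > 50 & fr < 10000;          % fit band (assumed)
lor = @(p, f) p(2) ./ (2*pi^2*(p(1)^2 + f.^2));
cost = @(p) sum((log(lor(p, fr(sel))) - log(P(sel))).^2);
pfit = fminsearch(cost, [300, 1e-13]);
fprintf('fitted fc = %.0f Hz -> stiffness k = %.2e N/m\n', ...
        pfit(1), 2*pi*gamma*pfit(1));
```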

  18. MATLAB implementation of a dynamic clamp with bandwidth >125 KHz capable of generating INa at 37°C

    PubMed Central

    Clausen, Chris; Valiunas, Virginijus; Brink, Peter R.; Cohen, Ira S.

    2012-01-01

    We describe the construction of a dynamic clamp with bandwidth >125 KHz that utilizes a high performance, yet low cost, standard home/office PC interfaced with a high-speed (16 bit) data acquisition module. High bandwidth is achieved by exploiting recently available software advances (code-generation technology, optimized real-time kernel). Dynamic-clamp programs are constructed using Simulink, a visual programming language. Blocks for computation of membrane currents are written in the high-level MATLAB language; no programming in C is required. The instrument can be used in single- or dual-cell configurations, with the capability to modify programs while experiments are in progress. We describe an algorithm for computing the fast transient Na+ current (INa) in real time, and test its accuracy and stability using rate constants appropriate for 37°C. We then construct a program capable of supplying three currents to a cell preparation: INa, the hyperpolarization-activated inward pacemaker current (If), and an inward-rectifier K+ current (IK1). The program corrects for the IR drop due to electrode current flow, and also records all voltages and currents. We tested this program on dual patch-clamped HEK293 cells where the dynamic clamp controls a current-clamp amplifier and a voltage-clamp amplifier controls membrane potential, and current-clamped HEK293 cells where the dynamic clamp produces spontaneous pacing behavior exhibiting Na+ spikes in otherwise passive cells. PMID:23224681
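    A generic flavor of the per-cycle computation (classic squid-axon Hodgkin-Huxley rates, NOT the authors' 37°C cardiac INa formulation) is the m^3*h update below; save it as na_step.m. In a dynamic-clamp loop, V is read from the amplifier each cycle, the gates are advanced by one step dt, and -INa is written back as the command current.

```matlab
% File: na_step.m -- one dynamic-clamp cycle for a generic HH-style INa.
function [INa, m, h] = na_step(V, m, h, dt)
    gNa = 120; ENa = 50;                           % mS/cm^2, mV (assumed)
    am = 0.1*(V + 40) / (1 - exp(-(V + 40)/10));   % alpha_m [1/ms]
    bm = 4 * exp(-(V + 65)/18);                    % beta_m
    ah = 0.07 * exp(-(V + 65)/20);                 % alpha_h
    bh = 1 / (1 + exp(-(V + 35)/10));              % beta_h
    m = m + dt * (am*(1 - m) - bm*m);              % forward-Euler gates
    h = h + dt * (ah*(1 - h) - bh*h);              % (V = -40 singularity
    INa = gNa * m^3 * h * (V - ENa);               %  not handled; sketch)
end
```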

  19. The Role of Auxiliary Variables in Deterministic and Deterministic-Stochastic Spatial Models of Air Temperature in Poland

    NASA Astrophysics Data System (ADS)

    Szymanowski, Mariusz; Kryza, Maciej

    2017-02-01

    Our study examines the role of auxiliary variables in the process of spatial modelling and mapping of climatological elements, with air temperature in Poland used as an example. The multivariable algorithms are the most frequently applied for spatialization of air temperature, and their results in many studies are proved to be better in comparison to those obtained by various one-dimensional techniques. In most of the previous studies, two main strategies were used to perform multidimensional spatial interpolation of air temperature. First, it was accepted that all variables significantly correlated with air temperature should be incorporated into the model. Second, it was assumed that the more spatial variation of air temperature was deterministically explained, the better was the quality of spatial interpolation. The main goal of the paper was to examine both above-mentioned assumptions. The analysis was performed using data from 250 meteorological stations and for 69 air temperature cases aggregated on different levels: from daily means to 10-year annual mean. Two cases were considered for detailed analysis. The set of potential auxiliary variables covered 11 environmental predictors of air temperature. Another purpose of the study was to compare the results of interpolation given by various multivariable methods using the same set of explanatory variables. Two regression models: multiple linear (MLR) and geographically weighted (GWR) method, as well as their extensions to the regression-kriging form, MLRK and GWRK, respectively, were examined. Stepwise regression was used to select variables for the individual models and the cross-validation method was used to validate the results with a special attention paid to statistically significant improvement of the model using the mean absolute error (MAE) criterion. The main results of this study led to rejection of both assumptions considered. Usually, including more than two or three of the most significantly

  20. MAGE (M-file/Mif Automatic GEnerator): A graphical interface tool for automatic generation of Object Oriented Micromagnetic Framework configuration files and Matlab scripts for results analysis

    NASA Astrophysics Data System (ADS)

    Chęciński, Jakub; Frankowski, Marek

    2016-10-01

    We present a tool for fully automated generation of both simulation configuration files (Mif) and Matlab scripts for automated data analysis, dedicated to the Object Oriented Micromagnetic Framework (OOMMF). We introduce an extended graphical user interface (GUI) that allows for fast, error-proof and easy creation of Mifs, without the programming skills usually required for manual Mif writing. With MAGE we provide OOMMF extensions complementing it with magnetoresistance and spin-transfer-torque calculations, as well as selection of local magnetization data for output. Our software allows for the creation of advanced simulation conditions such as simultaneous parameter sweeps and synchronized application of excitations. Furthermore, since the output of such simulations can be long and complicated, we provide another GUI allowing for automated creation of Matlab scripts suitable for analysis of such data with Fourier and wavelet transforms as well as user-defined operations.

  1. MATLAB Stability and Control Toolbox Trim and Static Stability Module

    NASA Technical Reports Server (NTRS)

    Kenny, Sean P.; Crespo, Luis

    2012-01-01

    MATLAB Stability and Control Toolbox (MASCOT) utilizes geometric, aerodynamic, and inertial inputs to calculate air vehicle stability in a variety of critical flight conditions. The code is based on fundamental, non-linear equations of motion and is able to translate results into a qualitative, graphical scale useful to the non-expert. MASCOT was created to provide the conceptual aircraft designer accurate predictions of air vehicle stability and control characteristics. The code takes as input mass property data in the form of an inertia tensor, aerodynamic loading data, and propulsion (i.e. thrust) loading data. Using fundamental nonlinear equations of motion, MASCOT then calculates vehicle trim and static stability data for the desired flight condition(s). Available flight conditions include six horizontal and six landing rotation conditions with varying options for engine out, crosswind, and sideslip, plus three take-off rotation conditions. Results are displayed through a unique graphical interface developed to provide the non-stability and control expert conceptual design engineer a qualitative scale indicating whether the vehicle has acceptable, marginal, or unacceptable static stability characteristics. If desired, the user can also examine the detailed, quantitative results.

  2. Fission gas bubble identification using MATLAB's image processing toolbox

    DOE PAGES

    Collette, R.; King, J.; Keiser, Jr., D.; ...

    2016-06-08

    Automated image processing routines have the potential to aid in the fuel performance evaluation process by eliminating bias in human judgment that may vary from person-to-person or sample-to-sample. This study presents several MATLAB based image analysis routines designed for fission gas void identification in post-irradiation examination of uranium molybdenum (U–Mo) monolithic-type plate fuels. Frequency domain filtration, enlisted as a pre-processing technique, can eliminate artifacts from the image without compromising the critical features of interest. This process is coupled with a bilateral filter, an edge-preserving noise removal technique aimed at preparing the image for optimal segmentation. Adaptive thresholding proved to be the most consistent gray-level feature segmentation technique for U–Mo fuel microstructures. The Sauvola adaptive threshold technique segments the image based on histogram weighting factors in stable contrast regions and local statistics in variable contrast regions. Once all processing is complete, the algorithm outputs the total fission gas void count, the mean void size, and the average porosity. The final results demonstrate an ability to extract fission gas void morphological data faster, more consistently, and at least as accurately as manual segmentation methods.
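
    The stages named above map naturally onto Image Processing Toolbox calls; a rough sketch follows, with illustrative parameters. Note that MATLAB's adaptthresh implements Bradley's local-mean method, used here as a stand-in for the Sauvola threshold of the study, and the input file name is hypothetical.

      % Frequency-domain filtration -> bilateral filter -> adaptive threshold
      I = im2double(imread('micrograph.png'));        % hypothetical input image
      F = fftshift(fft2(I));
      [r,c] = size(I); [u,v] = meshgrid(1:c, 1:r);
      D = hypot(u - c/2, v - r/2);
      F(D > 0.4*min(r,c)) = 0;                        % crude low-pass mask
      I = real(ifft2(ifftshift(F)));
      I = imbilatfilt(I);                             % edge-preserving denoising
      BW = imbinarize(I, adaptthresh(I, 0.5));        % adaptive segmentation
      S = regionprops(~BW, 'Area');                   % voids assumed dark
      voidCount = numel(S);
      meanVoid  = mean([S.Area]);
      porosity  = sum([S.Area]) / numel(I);           % area fraction of voids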

  3. Efficient MATLAB computations with sparse and factored tensors.

    SciTech Connect

    Bader, Brett William; Kolda, Tamara Gibson (Sandia National Lab, Livermore, CA)

    2006-12-01

    In this paper, the term tensor refers simply to a multidimensional or N-way array, and we consider how specially structured tensors allow for efficient storage and computation. First, we study sparse tensors, which have the property that the vast majority of the elements are zero. We propose storing sparse tensors using coordinate format and describe the computational efficiency of this scheme for various mathematical operations, including those typical to tensor decomposition algorithms. Second, we study factored tensors, which have the property that they can be assembled from more basic components. We consider two specific types: a Tucker tensor can be expressed as the product of a core tensor (which itself may be dense, sparse, or factored) and a matrix along each mode, and a Kruskal tensor can be expressed as the sum of rank-1 tensors. We are interested in the case where the storage of the components is less than the storage of the full tensor, and we demonstrate that many elementary operations can be computed using only the components. All of the efficiencies described in this paper are implemented in the Tensor Toolbox for MATLAB.
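
    The coordinate format is easy to emulate in plain MATLAB (the Tensor Toolbox wraps it in its sptensor class). The fragment below, with illustrative values, stores a sparse 3-way tensor as (subs, vals, sz) and contracts it with a vector along mode 1 while touching only the nonzeros.

      % Coordinate (COO) storage and a mode-1 tensor-times-vector product
      subs = [1 2 3; 4 1 2; 2 5 5];     % nonzero positions (i,j,k)
      vals = [0.5; -1.2; 3.0];          % nonzero values
      sz   = [4 5 5];                   % tensor dimensions
      x    = rand(sz(1), 1);            % vector for the mode-1 product
      % Y(j,k) = sum_i T(i,j,k)*x(i), accumulated over the nonzeros only
      Y = accumarray(subs(:,[2 3]), vals .* x(subs(:,1)), sz([2 3]));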

  4. Fission gas bubble identification using MATLAB's image processing toolbox

    SciTech Connect

    Collette, R.; King, J.; Keiser, Jr., D.; Miller, B.; Madden, J.; Schulthess, J.

    2016-06-08

    Automated image processing routines have the potential to aid in the fuel performance evaluation process by eliminating bias in human judgment that may vary from person-to-person or sample-to-sample. This study presents several MATLAB based image analysis routines designed for fission gas void identification in post-irradiation examination of uranium molybdenum (U–Mo) monolithic-type plate fuels. Frequency domain filtration, enlisted as a pre-processing technique, can eliminate artifacts from the image without compromising the critical features of interest. This process is coupled with a bilateral filter, an edge-preserving noise removal technique aimed at preparing the image for optimal segmentation. Adaptive thresholding proved to be the most consistent gray-level feature segmentation technique for U–Mo fuel microstructures. The Sauvola adaptive threshold technique segments the image based on histogram weighting factors in stable contrast regions and local statistics in variable contrast regions. Once all processing is complete, the algorithm outputs the total fission gas void count, the mean void size, and the average porosity. The final results demonstrate an ability to extract fission gas void morphological data faster, more consistently, and at least as accurately as manual segmentation methods.

  5. Finger-vein image separation algorithms and realization with MATLAB

    NASA Astrophysics Data System (ADS)

    Gao, Xiaoyan; Ma, Junshan; Wu, Jiajie

    2010-10-01

    According to the characteristics of the finger-vein image, we adopted a series of methods to enhance the contrast of the image in order to separate the finger-vein areas from the background areas, and to prepare for subsequent research such as feature extraction and recognition. The method consists of three steps: denoising, contrast enhancement and image binarization. In denoising, considering the relationship between gray levels in the adjacent areas of the finger-vein image, we adopted the Gradient Inverse Weighted Smoothing method. In contrast enhancement, we improved the conventional High Frequency Stress Filtering method and adopted a method which combines the traditional High Frequency Stress Filtering algorithm with Histogram Equalization. With this method, the contrast between the finger-vein area and the background area is enhanced significantly. During the binarization process, after taking into consideration the differences in gray levels between the different areas of the finger-vein image, we proposed a method which combines segment-wise binarization, in which the image is divided into several segments, with Morphological Image Processing. Our experimental results show that after the series of processing steps mentioned above, implemented in MATLAB, the finger-vein areas can be separated from the background areas clearly. We obtain a vivid figure of the finger-vein, which provides a reference for subsequent research such as finger-vein image feature extraction, matching and identification.
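
    A rough sketch of such a chain in MATLAB follows; the gradient-inverse-weighted filter is replaced here by a simple median filter, and all file names and parameters are illustrative stand-ins for the methods named above.

      % Denoise -> high-frequency emphasis + equalization -> segment-wise binarization
      I = im2double(imread('finger_vein.png'));       % hypothetical input
      I = medfilt2(I, [3 3]);                         % stand-in denoising step
      H = I - imgaussfilt(I, 8);                      % high-frequency component
      E = histeq(mat2gray(I + 2*H));                  % emphasis + equalization
      BW = false(size(E)); nSeg = 4;                  % binarize band by band
      edges = round(linspace(0, size(E,1), nSeg+1));
      for s = 1:nSeg
          rows = edges(s)+1 : edges(s+1);
          BW(rows,:) = imbinarize(E(rows,:));
      end
      BW = imopen(BW, strel('disk', 1));              % morphological clean-up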

  6. Improving ground-penetrating radar data in sedimentary rocks using deterministic deconvolution

    USGS Publications Warehouse

    Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.; Byrnes, A.P.

    2003-01-01

    Resolution is key to confidently identifying unique geologic features using ground-penetrating radar (GPR) data. Source wavelet "ringing" (related to bandwidth) in a GPR section limits resolution because of wavelet interference, and can smear reflections in time and/or space. The resultant potential for misinterpretation limits the usefulness of GPR. Deconvolution offers the ability to compress the source wavelet and improve temporal resolution. Unlike statistical deconvolution, deterministic deconvolution is mathematically simple and stable while providing the highest possible resolution because it uses the source wavelet unique to the specific radar equipment. Source wavelets generated in, transmitted through and acquired from air allow successful application of deterministic approaches to wavelet suppression. We demonstrate the validity of using a source wavelet acquired in air as the operator for deterministic deconvolution in a field application using "400-MHz" antennas at a quarry site characterized by interbedded carbonates with shale partings. We collected GPR data on a bench adjacent to cleanly exposed quarry faces in which we placed conductive rods to provide conclusive groundtruth for this approach to deconvolution. The best deconvolution results, which are confirmed by the conductive rods for the 400-MHz antenna tests, were observed for wavelets acquired when the transmitter and receiver were separated by 0.3 m. Applying deterministic deconvolution to GPR data collected in sedimentary strata at our study site resulted in an improvement in resolution (50%) and improved spatial location (0.10-0.15 m) of geologic features compared to the same data processed without deterministic deconvolution. The effectiveness of deterministic deconvolution for increased resolution and spatial accuracy of specific geologic features is further demonstrated by comparing results of deconvolved data with nondeconvolved data acquired along a 30-m transect immediately adjacent
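
    In essence, deterministic deconvolution divides the trace spectrum by the spectrum of the known (air-acquired) wavelet, stabilized by a water level; a minimal MATLAB sketch, with trace, wavelet and waterLevel as hypothetical inputs, follows.

      % Water-level deterministic deconvolution of one GPR trace
      nt = numel(trace);
      W  = fft(wavelet, nt);                     % source wavelet recorded in air
      D  = fft(trace,  nt);
      wl = waterLevel * max(abs(W));             % e.g. waterLevel = 0.05
      small = abs(W) < wl;
      W(small) = wl * exp(1i*angle(W(small)));   % clamp small spectral amplitudes
      out = real(ifft(D ./ W));                  % deconvolved trace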

  7. SU-E-T-22: A Deterministic Solver of the Boltzmann-Fokker-Planck Equation for Dose Calculation

    SciTech Connect

    Hong, X; Gao, H; Paganetti, H

    2015-06-15

    Purpose: The Boltzmann-Fokker-Planck equation (BFPE) accurately models the migration of photons/charged particles in tissues. While the Monte Carlo (MC) method is popular for solving BFPE in a statistical manner, we aim to develop a deterministic BFPE solver based on various state-of-the-art numerical acceleration techniques for rapid and accurate dose calculation. Methods: Our BFPE solver is based on a structured grid that is maximally parallelizable, with discretization in energy, angle and space, and its cross section coefficients are derived or directly imported from the Geant4 database. The physical processes that are taken into account are Compton scattering, photoelectric effect, pair production for photons, and elastic scattering, ionization and bremsstrahlung for charged particles. While the spatial discretization is based on the diamond scheme, the angular discretization synergizes the finite element method (FEM) and spherical harmonics (SH). Thus, SH is used to globally expand the scattering kernel and FEM is used to locally discretize the angular sphere. As a result, this hybrid method (FEM-SH) is both accurate in dealing with forward-peaking scattering via FEM, and efficient for multi-energy-group computation via SH. In addition, FEM-SH enables the analytical integration in the energy variable of the delta scattering kernel for elastic scattering, with reduced truncation error compared to the numerical integration of the classic SH-based multi-energy-group method. Results: The accuracy of the proposed BFPE solver was benchmarked against Geant4 for photon dose calculation. In particular, FEM-SH had improved accuracy compared to FEM, while both were within 2% of the results obtained with Geant4. Conclusion: A deterministic solver of the Boltzmann-Fokker-Planck equation is developed for dose calculation, and benchmarked against Geant4. Xiang Hong and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang

  8. Wildfire susceptibility mapping: comparing deterministic and stochastic approaches

    NASA Astrophysics Data System (ADS)

    Pereira, Mário; Leuenberger, Michael; Parente, Joana; Tonini, Marj

    2016-04-01

    Conservation of Nature and Forests (ICNF) (http://www.icnf.pt/portal), which provides a detailed description of the shape and the size of the area burnt by each fire in each year of occurrence. Two methodologies for susceptibility mapping were compared. First, the deterministic approach, based on the study of Verde and Zêzere (2010), which includes the computation of the favorability scores for each variable and the fire occurrence probability, as well as the validation of each model resulting from the integration of different variables. Second, as a non-linear method we selected the Random Forest algorithm (Breiman, 2001): this led us to identify the most relevant variables conditioning the presence of wildfire and allowed us to generate a map of fire susceptibility based on the resulting variable importance measures. By means of GIS techniques, we mapped the obtained predictions, which represent the susceptibility of the study area to fires. The results obtained by applying both methodologies for wildfire susceptibility mapping, as well as the wildfire hazard maps for different total annual burnt area scenarios, were compared with the reference maps and allowed us to assess the best approach for susceptibility mapping in Portugal. References: - Breiman, L. (2001). Random forests. Machine Learning, 45, 5-32. - Verde, J. C., & Zêzere, J. L. (2010). Assessment and validation of wildfire susceptibility and hazard in Portugal. Natural Hazards and Earth System Science, 10(3), 485-497.
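
    For the stochastic branch, a MATLAB analogue of the Random Forest step (the study itself cites Breiman's algorithm) can be sketched with TreeBagger from the Statistics and Machine Learning Toolbox; the predictor matrix X and the burnt/unburnt labels y are hypothetical.

      % Random Forest susceptibility model with variable importance
      rng(1);                                         % reproducibility
      rf = TreeBagger(500, X, y, 'Method', 'classification', ...
                      'OOBPredictorImportance', 'on');
      imp = rf.OOBPermutedPredictorDeltaError;        % variable importance measures
      [~, score] = predict(rf, X);                    % class probabilities
      susceptibility = score(:, 2);                   % P(fire) for each grid cell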

  9. SPIDYAN, a MATLAB library for simulating pulse EPR experiments with arbitrary waveform excitation.

    PubMed

    Pribitzer, Stephan; Doll, Andrin; Jeschke, Gunnar

    2016-02-01

    Frequency-swept chirp pulses, created with arbitrary waveform generators (AWGs), can achieve inversion over a range of several hundreds of MHz. Such passage pulses provide defined flip angles and increase sensitivity. The fact that spectra are not excited at once, but single transitions are passed one after another, can cause new effects in established pulse EPR sequences. We developed a MATLAB library for simulation of pulse EPR, which is especially suited for modeling spin dynamics in ultra-wideband (UWB) EPR experiments, but can also be used for other experiments and NMR. At present the command line controlled SPin DYnamics ANalysis (SPIDYAN) package supports one-spin and two-spin systems with arbitrary spin quantum numbers. By providing the program with appropriate spin operators and Hamiltonian matrices any spin system is accessible, with limits set only by available memory and computation time. Any pulse sequence using rectangular and linearly or variable-rate frequency-swept chirp pulses, including phase cycling can be quickly created. To keep track of spin evolution the user can choose from a vast variety of detection operators, including transition selective operators. If relaxation effects can be neglected, the program solves the Liouville-von Neumann equation and propagates spin density matrices. In the other cases SPIDYAN uses the quantum mechanical master equation and Liouvillians for propagation. In order to consider the resonator response function, which on the scale of UWB excitation limits bandwidth, the program includes a simple RLC circuit model. Another subroutine can compute waveforms that, for a given resonator, maintain a constant critical adiabaticity factor over the excitation band. Computational efficiency is enhanced by precomputing propagator lookup tables for the whole set of AWG output levels. The features of the software library are discussed and demonstrated with spin-echo and population transfer simulations.
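
    The core propagation step described above (piecewise-constant Hamiltonians acting on a density matrix) can be sketched in a few lines of MATLAB for a single spin-1/2; this is an illustration of the principle, not SPIDYAN's interface, and all values are placeholders.

      % Liouville-von Neumann propagation under a resonant drive
      sz = [1 0; 0 -1]/2;  sx = [0 1; 1 0]/2;    % spin-1/2 operators
      rho = [1 0; 0 0];                          % initial density matrix
      dt = 1e-9;  nSteps = 2000;
      sig = zeros(1, nSteps);
      for k = 1:nSteps
          H = 2*pi*10e6 * sx;                    % Hamiltonian for this time step
          U = expm(-1i*H*dt);                    % step propagator
          rho = U*rho*U';                        % propagate the density matrix
          sig(k) = real(trace(sz*rho));          % detection operator <Sz>
      end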

  10. SPIDYAN, a MATLAB library for simulating pulse EPR experiments with arbitrary waveform excitation

    NASA Astrophysics Data System (ADS)

    Pribitzer, Stephan; Doll, Andrin; Jeschke, Gunnar

    2016-02-01

    Frequency-swept chirp pulses, created with arbitrary waveform generators (AWGs), can achieve inversion over a range of several hundreds of MHz. Such passage pulses provide defined flip angles and increase sensitivity. The fact that spectra are not excited at once, but single transitions are passed one after another, can cause new effects in established pulse EPR sequences. We developed a MATLAB library for simulation of pulse EPR, which is especially suited for modeling spin dynamics in ultra-wideband (UWB) EPR experiments, but can also be used for other experiments and NMR. At present the command line controlled SPin DYnamics ANalysis (SPIDYAN) package supports one-spin and two-spin systems with arbitrary spin quantum numbers. By providing the program with appropriate spin operators and Hamiltonian matrices any spin system is accessible, with limits set only by available memory and computation time. Any pulse sequence using rectangular and linearly or variable-rate frequency-swept chirp pulses, including phase cycling can be quickly created. To keep track of spin evolution the user can choose from a vast variety of detection operators, including transition selective operators. If relaxation effects can be neglected, the program solves the Liouville-von Neumann equation and propagates spin density matrices. In the other cases SPIDYAN uses the quantum mechanical master equation and Liouvillians for propagation. In order to consider the resonator response function, which on the scale of UWB excitation limits bandwidth, the program includes a simple RLC circuit model. Another subroutine can compute waveforms that, for a given resonator, maintain a constant critical adiabaticity factor over the excitation band. Computational efficiency is enhanced by precomputing propagator lookup tables for the whole set of AWG output levels. The features of the software library are discussed and demonstrated with spin-echo and population transfer simulations.

  11. IB2d: a Python and MATLAB implementation of the immersed boundary method.

    PubMed

    Battista, Nicholas A; Strickland, W Christopher; Miller, Laura A

    2017-03-29

    The development of fluid-structure interaction (FSI) software involves trade-offs between ease of use, generality, performance, and cost. Typically there are large learning curves when using low-level software to model the interaction of an elastic structure immersed in a uniform density fluid. Many existing codes are not publicly available, and the commercial software that exists usually requires expensive licenses and may not be as robust or allow the necessary flexibility that in-house codes can provide. We present an open source immersed boundary software package, IB2d, with full implementations in both MATLAB and Python, that is capable of running a vast range of biomechanics models and is accessible to scientists who have experience in high-level programming environments. IB2d contains multiple options for constructing material properties of the fiber structure, as well as the advection-diffusion of a chemical gradient, muscle mechanics models, and artificial forcing to drive boundaries with a preferred motion.

  12. Optimization design of wind turbine drive train based on Matlab genetic algorithm toolbox

    NASA Astrophysics Data System (ADS)

    Li, R. N.; Liu, X.; Liu, S. J.

    2013-12-01

    In order to ensure the high efficiency of the whole flexible drive train of the front-end speed adjusting wind turbine, the working principle of the main part of the drive train is analyzed. As critical parameters, the rotating speed ratios of three planetary gear trains are selected as the research subject. The mathematical model of the torque converter speed ratio is established based on these three critical variables, and the effect of key parameters on the efficiency of the hydraulic mechanical transmission is analyzed. Based on the torque balance and the energy balance, and with reference to the hydraulic mechanical transmission characteristics, the transmission efficiency expression of the whole drive train is established. The fitness function and constraint functions are established based on the drive train transmission efficiency and the torque converter rotating speed ratio range, respectively. The optimization calculation is then carried out using the MATLAB genetic algorithm toolbox. The optimization method and results provide an optimization program for the exact matching of the wind turbine rotor, gearbox, hydraulic mechanical transmission, hydraulic torque converter and synchronous generator, ensure that the drive train works at high efficiency, and give a reference for the selection of the torque converter and hydraulic mechanical transmission.
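
    The optimization step amounts to a bound-constrained call to ga from the Global Optimization Toolbox; a minimal sketch follows, in which effDrive and the speed-ratio bounds are hypothetical stand-ins for the paper's efficiency model.

      % GA search for the three speed ratios maximizing drive-train efficiency
      effDrive = @(i) -(0.9 - 0.05*(i(1)-2)^2 ...   % negated: ga minimizes
                            - 0.03*(i(2)-1.5)^2 - 0.04*(i(3)-3)^2);
      lb = [1.0 0.5 1.5];  ub = [4.0 3.0 5.0];      % speed-ratio ranges
      opts = optimoptions('ga', 'PopulationSize', 50, 'MaxGenerations', 100);
      [iOpt, fval] = ga(effDrive, 3, [], [], [], [], lb, ub, [], opts);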

  13. Finite-difference schemes for reaction-diffusion equations modeling predator-prey interactions in MATLAB.

    PubMed

    Garvie, Marcus R

    2007-04-01

    We present two finite-difference algorithms for studying the dynamics of spatially extended predator-prey interactions with the Holling type II functional response and logistic growth of the prey. The algorithms are stable and convergent provided the time step is below a (non-restrictive) critical value. This is advantageous as it is well-known that the dynamics of approximations of differential equations (DEs) can differ significantly from that of the underlying DEs themselves. This is particularly important for the spatially extended systems that are studied in this paper as they display a wide spectrum of ecologically relevant behavior, including chaos. Furthermore, there are implementational advantages of the methods. For example, due to the structure of the resulting linear systems, standard direct and iterative solvers are guaranteed to converge. We also present the results of numerical experiments in one and two space dimensions and illustrate the simplicity of the numerical methods with short MATLAB programs. Users can download, edit, and run the codes from http://www.uoguelph.ca/~mgarvie/ to investigate the key dynamical properties of spatially extended predator-prey interactions.
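
    The scheme is easy to reproduce; the sketch below integrates a 1-D predator-prey reaction-diffusion system with a Holling type II response by explicit finite differences and zero-flux boundaries. The kinetics and parameter values are illustrative, not those of the paper, and the time step respects the stability restriction mentioned above.

      % Explicit finite differences for a 1-D Holling type II predator-prey system
      J = 200; L = 100; dx = L/J; dt = 1e-2; nT = 5000;
      d1 = 1; d2 = 1; a = 0.4; b = 2.0; g = 0.6;      % illustrative parameters
      u = 0.2 + 0.1*rand(J+1,1);  v = 0.2*ones(J+1,1);% initial data
      e = ones(J+1,1);
      Lap = spdiags([e -2*e e], -1:1, J+1, J+1);
      Lap(1,2) = 2;  Lap(end,end-1) = 2;              % zero-flux boundaries
      Lap = Lap / dx^2;
      for n = 1:nT
          f = u ./ (u + a);                           % Holling type II response
          u = u + dt*(d1*(Lap*u) + u.*(1-u) - v.*f);  % prey update
          v = v + dt*(d2*(Lap*v) + b*v.*f - g*v);     % predator update
      end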

  14. Development of the Borehole 2-D Seismic Tomography Software Using MATLAB

    NASA Astrophysics Data System (ADS)

    Nugraha, A. D.; Syahputra, A.; Fatkhan, F.; Sule, R.; Hendriyana, A.

    2011-12-01

    We developed 2-D borehole seismic tomography software, called "EARTHMAX-2D TOMOGRAPHY", to image subsurface physical properties, including P-wave and S-wave velocities, between two boreholes. We used the Graphical User Interface (GUI) facilities of the MATLAB programming language to create the software. In this software, travel times of seismic waves from source to receiver, computed with the pseudo-bending ray tracing method, are used as input for the tomographic inversion. The user can also set up the model parameterization and initial velocity model, run the ray tracing, conduct the borehole seismic tomography inversion, and finally visualize the inversion results. The LSQR method is applied to solve the tomographic inversion. We provide a checkerboard resolution test to evaluate the model resolution of the tomographic inversion. As validation of the developed software, we tested it for geotechnical purposes. We conducted data acquisition in the "ITB X-field" located on the ITB campus, using two boreholes with a depth of 39 meters. Seismic wave sources were generated by an impulse generator and a sparker, and were recorded by a type-3 borehole hydrophone string. We then analyzed and picked seismic arrival times as input for the tomographic inversion. As a result, we can image the estimated weathering layer, sediment layer, and basement rock in the field, depicted by the seismic velocity structure. More detailed information about the developed software will be presented. Keywords: borehole, tomography, earthmax-2D, inversion
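
    The inversion step reduces to a sparse least-squares solve; a minimal sketch with MATLAB's lsqr is shown below, where G (ray-path length matrix from the ray tracer), dtres (travel-time residuals) and s0 (initial slowness model) are hypothetical inputs.

      % LSQR solve of the linearized travel-time tomography system
      ds = lsqr(G, dtres, 1e-6, 200);    % slowness perturbation update
      s  = s0 + ds;                      % updated slowness model
      vP = 1 ./ s;                       % P-wave velocity image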

  15. Improve Data Mining and Knowledge Discovery through the use of MatLab

    NASA Technical Reports Server (NTRS)

    Shaykahian, Gholan Ali; Martin, Dawn Elliott; Beil, Robert

    2011-01-01

    Data mining is widely used to mine business, engineering, and scientific data. Data mining uses pattern based queries, searches, or other analyses of one or more electronic databases/datasets in order to discover or locate a predictive pattern or anomaly indicative of system failure, criminal or terrorist activity, etc. There are various algorithms, techniques and methods used to mine data, including neural networks, genetic algorithms, decision trees, the nearest neighbor method, rule induction, association analysis, slice and dice, segmentation, and clustering. These algorithms, techniques and methods, used to detect patterns in a dataset, have been used in the development of numerous open source and commercially available products and technology for data mining. Data mining is best realized when latent information in a large quantity of stored data is discovered. No one technique solves all data mining problems; the challenge is to select algorithms or methods appropriate to strengthen data/text mining and trending within given datasets. In recent years, throughout industry, academia and government agencies, thousands of data systems have been designed and tailored to serve specific engineering and business needs. Many of these systems use databases with relational algebra and structured query language to categorize and retrieve data. In these systems, data analyses are limited and require prior explicit knowledge of metadata and database relations; they lack exploratory data mining and discovery of latent information. This presentation introduces MatLab(TM) (MATrix LABoratory), an engineering and scientific data analysis tool for performing data mining. MatLab was originally intended to perform purely numerical calculations (a glorified calculator). Now, in addition to having hundreds of mathematical functions, it is a programming language with hundreds of built-in standard functions and numerous available toolboxes. MatLab's ease of data processing, visualization and

  16. Improve Data Mining and Knowledge Discovery Through the Use of MatLab

    NASA Technical Reports Server (NTRS)

    Shaykhian, Gholam Ali; Martin, Dawn (Elliott); Beil, Robert

    2011-01-01

    Data mining is widely used to mine business, engineering, and scientific data. Data mining uses pattern based queries, searches, or other analyses of one or more electronic databases/datasets in order to discover or locate a predictive pattern or anomaly indicative of system failure, criminal or terrorist activity, etc. There are various algorithms, techniques and methods used to mine data, including neural networks, genetic algorithms, decision trees, the nearest neighbor method, rule induction, association analysis, slice and dice, segmentation, and clustering. These algorithms, techniques and methods, used to detect patterns in a dataset, have been used in the development of numerous open source and commercially available products and technology for data mining. Data mining is best realized when latent information in a large quantity of stored data is discovered. No one technique solves all data mining problems; the challenge is to select algorithms or methods appropriate to strengthen data/text mining and trending within given datasets. In recent years, throughout industry, academia and government agencies, thousands of data systems have been designed and tailored to serve specific engineering and business needs. Many of these systems use databases with relational algebra and structured query language to categorize and retrieve data. In these systems, data analyses are limited and require prior explicit knowledge of metadata and database relations; they lack exploratory data mining and discovery of latent information. This presentation introduces MatLab(R) (MATrix LABoratory), an engineering and scientific data analysis tool for performing data mining. MatLab was originally intended to perform purely numerical calculations (a glorified calculator). Now, in addition to having hundreds of mathematical functions, it is a programming language with hundreds of built-in standard functions and numerous available toolboxes. MatLab's ease of data processing, visualization and its

  17. Stability analysis of multi-group deterministic and stochastic epidemic models with vaccination rate

    NASA Astrophysics Data System (ADS)

    Wang, Zhi-Gang; Gao, Rui-Mei; Fan, Xiao-Ming; Han, Qi-Xing

    2014-09-01

    We discuss in this paper a deterministic multi-group MSIR epidemic model with a vaccination rate; the basic reproduction number ℛ0, a key parameter in epidemiology, is a threshold which determines the persistence or extinction of the disease. By using Lyapunov function techniques, we show that if ℛ0 is greater than 1 and the deterministic model obeys some conditions, then the disease will prevail, the infective class persists and the endemic state is asymptotically stable in a feasible region. If ℛ0 is less than or equal to 1, then the infective class disappears and the disease dies out. In addition, stochastic noise around the endemic equilibrium is added to the deterministic MSIR model so that the deterministic model is extended to a system of stochastic ordinary differential equations. For the stochastic version, we carry out a detailed analysis of the asymptotic behavior of the stochastic model. Regarding the value of ℛ0, when the stochastic system obeys some conditions and ℛ0 is greater than 1, we deduce that the stochastic system is stochastically asymptotically stable. Finally, the deterministic and stochastic model dynamics are illustrated through computer simulations.
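
    A single-group SIR-with-vaccination simplification of such a model, integrated with ode45, illustrates the threshold role of ℛ0; every parameter value below is illustrative, and the ℛ0 expression follows from the disease-free susceptible level of this simplified system.

      % Deterministic single-group model with vaccination rate p
      b = 0.02; mu = 0.02; p = 0.05;            % births, deaths, vaccination
      beta = 1.5; gamma = 0.2;                  % transmission, recovery
      S0 = b/(mu + p);                          % disease-free susceptible level
      R0 = beta*S0/(gamma + mu);                % threshold quantity (> 1 here)
      rhs = @(t,y) [ b - beta*y(1)*y(2) - (mu+p)*y(1);
                     beta*y(1)*y(2) - (gamma+mu)*y(2);
                     gamma*y(2) + p*y(1) - mu*y(3) ];
      [t,y] = ode45(rhs, [0 400], [S0-0.01; 0.01; 0]);
      plot(t, y); legend('S','I','R');          % I persists since R0 > 1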

  18. Deterministic vs. probabilistic analyses to identify sensitive parameters in dose assessment using RESRAD.

    PubMed

    Kamboj, Sunita; Cheng, Jing-Jy; Yu, Charley

    2005-05-01

    The dose assessments for sites containing residual radioactivity usually involve the use of computer models that employ input parameters describing the physical conditions of the contaminated and surrounding media and the living and consumption patterns of the receptors in analyzing potential doses to the receptors. The precision of the dose results depends on the precision of the input parameter values. The identification of sensitive parameters that have great influence on the dose results would help set priorities in research and information gathering for parameter values so that a more precise dose assessment can be conducted. Two methods of identifying site-specific sensitive parameters, deterministic and probabilistic, were compared by applying them to the RESRAD computer code for analyzing radiation exposure for a residential farmer scenario. The deterministic method has difficulty in evaluating the effect of simultaneous changes in a large number of input parameters on the model output results. The probabilistic method easily identified the most sensitive parameters, but the sensitivity measure of other parameters was obscured. The choice of sensitivity analysis method would depend on the availability of site-specific data. Generally speaking, the deterministic method would identify the same set of sensitive parameters as the probabilistic method when 1) the baseline values used in the deterministic method were selected near the mean or median value of each parameter and 2) the selected range of parameter values used in the deterministic method was wide enough to cover the 5th to 95th percentile values from the distribution of that parameter.

  19. Mathematical Formulation used by MATLAB Code to Convert FTIR Interferograms to Calibrated Spectra

    SciTech Connect

    Armstrong, Derek Elswick

    2016-07-19

    This report discusses the mathematical procedures used to convert raw interferograms from Fourier transform infrared (FTIR) sensors to calibrated spectra. The work discussed in this report was completed as part of the Helios project at Los Alamos National Laboratory. MATLAB code was developed to convert the raw interferograms to calibrated spectra. The report summarizes the developed MATLAB scripts and functions, along with a description of the mathematical methods used by the code. The first step in working with raw interferograms is to convert them to uncalibrated spectra by applying an apodization function to the raw data and then by performing a Fourier transform. The developed MATLAB code also addresses phase error correction by applying the Mertz method. This report provides documentation for the MATLAB scripts.
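
    The processing chain (apodization, Fourier transform, Mertz phase correction) can be sketched compactly; the fragment below is a simplified illustration rather than the Helios code, with ifg (a measured interferogram), zpd (its centerburst index) and the segment length as hypothetical inputs.

      % Interferogram -> phase-corrected single-beam spectrum (Mertz method)
      n = numel(ifg);
      w = 0.54 + 0.46*cos(pi*((1:n)' - zpd)/(n - zpd));  % apodization window
      y = (ifg - mean(ifg)) .* w;
      y = circshift(y, 1 - zpd);                 % rotate ZPD to the first sample
      Y = fft(y);
      m = 256;                                   % short double-sided segment
      seg = ifg(zpd-m : zpd+m-1) - mean(ifg);
      P = fft(circshift(seg, -m), n);            % low-resolution phase estimate
      spec = real(Y .* exp(-1i*angle(P)));       % phase-corrected spectrum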

  20. a Matlab Geodetic Software for Processing Airborne LIDAR Bathymetry Data

    NASA Astrophysics Data System (ADS)

    Pepe, M.; Prezioso, G.

    2015-04-01

    The ability to build three-dimensional models through technologies based on satellite navigation systems GNSS and the continuous development of new sensors, such as Airborne Laser Scanning Hydrography (ALH), data acquisition methods and 3D multi-resolution representations, have contributed significantly to the digital 3D documentation, mapping, preservation and representation of landscapes and heritage, as well as to the growth of research in these fields. However, GNSS systems yield the ellipsoidal height; to transform this height into an orthometric height it is necessary to know a geoid undulation model. The latest and most accurate global geoid undulation model available worldwide is EGM2008, which has been publicly released by the U.S. National Geospatial-Intelligence Agency (NGA) EGM Development Team. Given the availability and accuracy of this geoid model, we can use it in geomatics applications that require the conversion of heights. Using this model, to correct the elevation of a point that does not coincide with any grid node, the undulation must be interpolated from the adjacent nodes. The purpose of this paper is to produce Matlab® geodetic software for processing airborne LIDAR bathymetry data. In particular, we focus on point clouds in the ASPRS LAS format and convert the ellipsoidal heights to orthometric heights. The algorithm, valid over the whole globe and operative for all UTM zones, performs the conversion of ellipsoidal heights using the EGM2008 model. For this model we analyse the slopes which occur, in some critical areas, between the nodes of the undulation grid; we focus our attention on marine areas, verifying the impact that the slopes have on the calculation of the orthometric height and, consequently, on the accuracy of the 3-D point clouds. This experiment is carried out by analysing an ASPRS LAS file containing topographic and bathymetric data collected with LIDAR systems along the coasts of Oregon and Washington (USA).
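
    The height conversion itself reduces to a grid interpolation followed by H = h - N; a minimal sketch is shown below, where lonGrid, latGrid and N (the EGM2008 undulation grid) and lon, lat, h (coordinates and ellipsoidal heights from the LAS file) are hypothetical arrays.

      % Ellipsoidal-to-orthometric conversion via bilinear geoid interpolation
      Npt = interp2(lonGrid, latGrid, N, lon, lat, 'linear');
      H   = h - Npt;        % orthometric height = ellipsoidal height - undulation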

  1. When South meets South. How Navrongo learnt from Matlab.

    PubMed

    Yeboah-afari, A

    1997-01-01

    This article describes the community-oriented reproductive health system that is being implemented in the remote area of Navrongo in Ghana. The system was modeled on the lessons learned from the Matlab experience in Bangladesh. The Navrongo Community and Family Planning Research Project aimed to find the best way to deliver health services and family planning. The project used residential community nurses and village health volunteers, who sought to overcome the perception by some that contraception was taboo and to determine how best to integrate family planning into existing services. 3 nurses tested 3 strategies in 3 different communities. The best option was for the nurse to live in a separate detached hut where privacy could be assured. Communal labor was used to build the huts for the nurses. Each nurse was supplied with a motorcycle, drugs, and contraceptives. Clients paid a small fee for services and supplies. The nurses traveled between compounds supplying services and health education. Each compound served an extended family in a set of huts and a total population ranging from 2 to 100. Nurses set out in the early morning hours in order to find women at home before the women left for their farms or the market. Nurses were on 24-hour call when not traveling to compounds. Nurses spoke the local language and received visits from clients at the nurses' homes. Nurses found that many women held misconceptions about contraception. The women feared infertility, and the men feared unfaithfulness. Since the project began, the number of acceptors increased from 2 to 217. Most preferred periodic injection. Child immunization rose, and child survival improved. Critics question whether donor support will be sustained. The Navrongo project is being replicated in other Ghana locations and other African countries.

  2. Hyperspectral imaging in medicine: image pre-processing problems and solutions in Matlab.

    PubMed

    Koprowski, Robert

    2015-11-01

    The paper presents problems and solutions related to hyperspectral image pre-processing. New methods of preliminary image analysis are proposed. The paper shows problems occurring in Matlab when trying to analyse this type of image. Moreover, new methods are discussed which are provided as Matlab source code that can be used in practice without any licensing restrictions. The proposed application and a sample result of hyperspectral image analysis are also presented.

  3. Matlab based Toolkits used to Interface with Optical Design Software for NASA's James Webb Space Telescope

    NASA Technical Reports Server (NTRS)

    Howard, Joseph

    2007-01-01

    The viewgraph presentation provides an introduction to the James Webb Space Telescope (JWST). The first part provides a brief overview of Matlab toolkits including CodeV, OSLO, and Zemax Toolkits. The toolkit overview examines purpose, layout, how Matlab gets data from CodeV, function layout, and using cvHELP. The second part provides examples of use with JWST, including wavefront sensitivities and alignment simulations.

  4. A Comparison of Probabilistic and Deterministic Campaign Analysis for Human Space Exploration

    NASA Technical Reports Server (NTRS)

    Merrill, R. Gabe; Andraschko, Mark; Stromgren, Chel; Cirillo, Bill; Earle, Kevin; Goodliff, Kandyce

    2008-01-01

    Human space exploration is by its very nature an uncertain endeavor. Vehicle reliability, technology development risk, budgetary uncertainty, and launch uncertainty all contribute to stochasticity in an exploration scenario. However, traditional strategic analysis has been done in a deterministic manner, analyzing and optimizing the performance of a series of planned missions. History has shown that exploration scenarios rarely follow such a planned schedule. This paper describes a methodology to integrate deterministic and probabilistic analysis of scenarios in support of human space exploration. Probabilistic strategic analysis is used to simulate "possible" scenario outcomes, based upon the likelihood of occurrence of certain events and a set of pre-determined contingency rules. The results of the probabilistic analysis are compared to the nominal results from the deterministic analysis to evaluate the robustness of the scenario to adverse events and to test and optimize contingency planning.

  5. Deterministic creation, pinning, and manipulation of quantized vortices in a Bose-Einstein condensate

    NASA Astrophysics Data System (ADS)

    Samson, E. C.; Wilson, K. E.; Newman, Z. L.; Anderson, B. P.

    2016-02-01

    We experimentally and numerically demonstrate deterministic creation and manipulation of a pair of oppositely charged singly quantized vortices in a highly oblate Bose-Einstein condensate (BEC). Two identical blue-detuned, focused Gaussian laser beams that pierce the BEC serve as repulsive obstacles for the superfluid atomic gas; by controlling the positions of the beams within the plane of the BEC, superfluid flow is deterministically established around each beam such that two vortices of opposite circulation are generated by the motion of the beams, with each vortex pinned to the in situ position of a laser beam. We study the vortex creation process, and show that the vortices can be moved about within the BEC by translating the positions of the laser beams. This technique can serve as a building block in future experimental techniques to create, on-demand, deterministic arrangements of few or many vortices within a BEC for precise studies of vortex dynamics and vortex interactions.

  6. A deterministic annealing algorithm for a combinatorial optimization problem using replicator equations

    NASA Astrophysics Data System (ADS)

    Tsuchiya, Kazuo; Nishiyama, Takehiro; Tsujita, Katsuyoshi

    2001-02-01

    We have proposed an optimization method for a combinatorial optimization problem using replicator equations. To improve the solution further, a deterministic annealing algorithm may be applied. During the annealing process, bifurcations of equilibrium solutions will occur and affect the performance of the deterministic annealing algorithm. In this paper, the bifurcation structure of the proposed model is analyzed in detail. It is shown that only pitchfork bifurcations occur in the annealing process, and the solution obtained by the annealing is the branch uniquely connected with the uniform solution. It is also shown experimentally that in many cases, this solution corresponds to a good approximate solution of the optimization problem. Based on the results, a deterministic annealing algorithm is proposed and applied to the quadratic assignment problem to verify its performance.
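
    The underlying dynamics are simple to state: an assignment vector on the simplex is updated in proportion to its fitness under a matrix encoding the objective. The sketch below shows a discrete-time replicator iteration started from the uniform solution mentioned above; the fitness matrix is random and illustrative, and the annealing itself (slowly varying a control parameter in the matrix) is omitted.

      % Discrete-time replicator dynamics from the uniform solution
      nvar = 10;
      A = rand(nvar); A = (A + A')/2;       % illustrative symmetric fitness matrix
      x = ones(nvar,1)/nvar;                % uniform starting point
      for k = 1:500
          f = A*x;                          % fitness of each component
          x = x .* f / (x'*f);              % update; x stays on the simplex
      end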

  7. Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology

    PubMed Central

    Gao, Fei; Li, Ye; Novak, Igor L.; Slepchenko, Boris M.

    2016-01-01

    Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium ‘sparks’ as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell. PMID:27959915

  8. Competitive exclusion, beta diversity, and deterministic vs. stochastic drivers of community assembly.

    PubMed

    Segre, Hila; Ron, Ronen; De Malach, Niv; Henkin, Zalmen; Mandel, Micha; Kadmon, Ronen

    2014-11-01

    Species diversity has two components - number of species and spatial turnover in species composition (beta-diversity). Using a field experiment focusing on a system of Mediterranean grasslands, we show that interspecific competition may influence the two components in the same direction or in opposite directions, depending on whether competitive exclusions are deterministic or stochastic. Deterministic exclusions reduce both patch-scale richness and beta-diversity, thereby homogenising the community. Stochastic extinctions reduce richness at the patch scale, but increase the differences in species composition among patches. These results indicate that studies of competitive effects on beta diversity may help to distinguish between deterministic and stochastic components of competitive exclusion. Such distinction is crucial for understanding the causal relationship between competition and species diversity, one of the oldest and most fundamental questions in ecology.

  9. Automatic mesh adaptivity for hybrid Monte Carlo/deterministic neutronics modeling of difficult shielding problems

    DOE PAGES

    Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; ...

    2015-06-30

    The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class super computer.

  10. Automatic mesh adaptivity for hybrid Monte Carlo/deterministic neutronics modeling of difficult shielding problems

    SciTech Connect

    Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; Mosher, Scott W.; Peplow, Douglas E.; Wagner, John C.; Evans, Thomas M.; Grove, Robert E.

    2015-06-30

    The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class super computer.

  11. Solving difficult problems creatively: a role for energy optimised deterministic/stochastic hybrid computing

    PubMed Central

    Palmer, Tim N.; O’Shea, Michael

    2015-01-01

    How is the brain configured for creativity? What is the computational substrate for ‘eureka’ moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal (ultimately quantum decoherent) noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete. PMID:26528173

  12. MSiReader: an open-source interface to view and analyze high resolving power MS imaging files on Matlab platform.

    PubMed

    Robichaud, Guillaume; Garrard, Kenneth P; Barry, Jeremy A; Muddiman, David C

    2013-05-01

    During the past decade, the field of mass spectrometry imaging (MSI) has greatly evolved, to a point where it has now been fully integrated by most vendors as an optional or dedicated platform that can be purchased with their instruments. However, the technology is not mature and multiple research groups in both academia and industry are still very actively studying the fundamentals of imaging techniques, adapting the technology to new ionization sources, and developing new applications. As a result, there is an important variety of data file formats used to store mass spectrometry imaging data and, concurrent with the development of MSI, collaborative efforts have been undertaken to introduce common imaging data file formats. However, few free software packages to read and analyze files of these different formats are readily available. We introduce here MSiReader, a free open source application to read and analyze high resolution MSI data from the most common MSI data formats. The application is built on the Matlab platform (Mathworks, Natick, MA, USA) and includes a large selection of data analysis tools and features. People who are unfamiliar with the Matlab language will have little difficulty navigating the user-friendly interface, and users with Matlab programming experience can adapt and customize MSiReader for their own needs.

  13. MSiReader: An Open-Source Interface to View and Analyze High Resolving Power MS Imaging Files on Matlab Platform

    NASA Astrophysics Data System (ADS)

    Robichaud, Guillaume; Garrard, Kenneth P.; Barry, Jeremy A.; Muddiman, David C.

    2013-05-01

    During the past decade, the field of mass spectrometry imaging (MSI) has greatly evolved, to a point where it has now been fully integrated by most vendors as an optional or dedicated platform that can be purchased with their instruments. However, the technology is not mature and multiple research groups in both academia and industry are still very actively studying the fundamentals of imaging techniques, adapting the technology to new ionization sources, and developing new applications. As a result, there is an important variety of data file formats used to store mass spectrometry imaging data and, concurrent with the development of MSI, collaborative efforts have been undertaken to introduce common imaging data file formats. However, few free software packages to read and analyze files of these different formats are readily available. We introduce here MSiReader, a free open source application to read and analyze high resolution MSI data from the most common MSI data formats. The application is built on the Matlab platform (Mathworks, Natick, MA, USA) and includes a large selection of data analysis tools and features. People who are unfamiliar with the Matlab language will have little difficulty navigating the user-friendly interface, and users with Matlab programming experience can adapt and customize MSiReader for their own needs.

  14. Deterministic Approach for Estimating Critical Rainfall Threshold of Rainfall-induced Landslide in Taiwan

    NASA Astrophysics Data System (ADS)

    Chung, Ming-Chien; Tan, Chih-Hao; Chen, Mien-Min; Su, Tai-Wei

    2013-04-01

    Taiwan is an active mountain belt created by the oblique collision between the northern Luzon arc and the Asian continental margin. The inherent complexities of its geological nature create numerous discontinuities through rock masses and relatively steep hillsides on the island. In recent years, the increase in the frequency and intensity of extreme natural events due to global warming or climate change has brought significant landslides. The causes of landslides in these slopes are attributed to a number of factors. As is well known, rainfall is one of the most significant triggering factors for landslide occurrence. In general, rainfall infiltration results in changes to the suction and moisture of the soil, raises the unit weight of the soil, and reduces the shear strength of the soil in the colluvium of a landslide. The stability of a landslide is closely related to the groundwater pressure in response to rainfall infiltration, the geological and topographical conditions, and the physical and mechanical parameters. To assess the potential susceptibility to landslides, effective modeling of rainfall-induced landslides is essential. In this paper, a deterministic approach is adopted to estimate the critical rainfall threshold of rainfall-induced landslides. The critical rainfall threshold is defined as the accumulated rainfall at which the safety factor of the slope is equal to 1.0. First, the deterministic approach establishes the hydrogeological conceptual model of the slope based on a series of in-situ investigations, including geological drilling, surface geological investigation, geophysical investigation, and borehole explorations. The material strength and hydraulic properties of the model were given by field and laboratory tests. Second, the hydraulic and mechanical parameters of the model are calibrated with long-term monitoring data. Furthermore, a two-dimensional numerical program, GeoStudio, was employed to perform the modelling practice. Finally

  15. Development of an improved MATLAB GUI for the prediction of coefficients of restitution, and integration into LMS.

    SciTech Connect

    Baca, Renee Nicole; Congdon, Michael L.; Brake, Matthew Robert

    2014-07-01

    In 2012, a Matlab GUI for the prediction of the coefficient of restitution was developed in order to enable the formulation of more accurate Finite Element Analysis (FEA) models of components. This report details the development of a new Rebound Dynamics GUI, and how it differs from the previously developed program. The new GUI includes several new features, such as source and citation documentation for the material database, as well as a multiple-materials impact modeler for use with LMS Virtual.Lab Motion (LMS VLM), a rigid body dynamics modeling software package. The Rebound Dynamics GUI has been designed to work with LMS VLM to enable straightforward incorporation of velocity-dependent coefficients of restitution in rigid body dynamics simulations.

  16. An open-source Matlab code package for improved rank-reduction 3D seismic data denoising and reconstruction

    NASA Astrophysics Data System (ADS)

    Chen, Yangkang; Huang, Weilin; Zhang, Dong; Chen, Wei

    2016-10-01

    Simultaneous seismic data denoising and reconstruction is a currently popular research subject in modern reflection seismology. Traditional rank-reduction based 3D seismic data denoising and reconstruction algorithms cause strong residual noise in the reconstructed data and thus affect subsequent processing and interpretation tasks. In this paper, we propose an improved rank-reduction method by modifying the truncated singular value decomposition (TSVD) formula used in the traditional method. The proposed approach can help us obtain nearly perfect reconstruction performance even in the case of low signal-to-noise ratio (SNR). The proposed algorithm is tested on synthetic and field data examples. Considering that seismic data interpolation and denoising source packages are seldom in the public domain, we also provide an open-source Matlab package that serves as a program template for the rank-reduction based simultaneous denoising and reconstruction algorithm.
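
    The traditional TSVD step that the paper modifies is a plain singular-value truncation of a matrix built from a constant-frequency slice of the data; a sketch follows, with M and the rank K as hypothetical inputs, and the marked line showing where a modified singular-value weighting would be inserted.

      % Rank-reduction denoising of one frequency slice by truncated SVD
      [U,S,V] = svd(M, 'econ');
      s = diag(S);
      K = 3;                                % assumed number of plane-wave events
      s(K+1:end) = 0;                       % traditional TSVD truncation
      % s(1:K) = s(1:K).*w(1:K);            % <- modified weighting would go here
      Mden = U * diag(s) * V';              % rank-reduced (denoised) matrix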

  17. Accurate calculation and Matlab based fast realization of merit function's Hesse matrix for the design of multilayer optical coating

    NASA Astrophysics Data System (ADS)

    Wu, Su-Yong; Long, Xing-Wu; Yang, Kai-Yong

    2009-09-01

    To address the low speed and poor efficiency of current domestic multilayer optical coating design when the layer count is large, the accurate calculation and fast realization of the merit function's gradient and Hesse matrix are investigated. Based on the matrix method for calculating the spectral properties of a multilayer optical coating, an analytic model is established theoretically, and the corresponding accurate and fast computation is achieved by programming in Matlab. Theoretical and simulated results indicate that this model is mathematically strict and accurate, and that its precision reaches the floating-point limit of the computer, with short computation times. It is therefore well suited to improving the search speed and efficiency of local optimization methods based on the derivatives of the merit function, and performs outstandingly in multilayer optical coating design with a large layer number.
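
    The matrix method referred to above is the standard characteristic-matrix formulation of thin-film optics. As a sketch of the forward calculation only (the paper's contribution is the analytic gradient and Hesse matrix, not reproduced here), a normal-incidence reflectance computation in Matlab might read as follows; the stack indices and thicknesses are hypothetical:

      % Characteristic-matrix method: normal-incidence reflectance of a stack.
      % Textbook sketch; layer indices and thicknesses are made-up examples.
      n0 = 1.0; ns = 1.52;                 % incident medium and substrate indices
      n  = [2.35 1.46 2.35 1.46];          % layer refractive indices (hypothetical)
      d  = 550e-9 ./ (4*n);                % quarter-wave thicknesses at 550 nm
      lambda = 550e-9;
      M = eye(2);
      for j = 1:numel(n)
          delta = 2*pi*n(j)*d(j)/lambda;   % phase thickness of layer j
          M = M * [cos(delta), 1i*sin(delta)/n(j); 1i*n(j)*sin(delta), cos(delta)];
      end
      Y = (M(2,1) + M(2,2)*ns) / (M(1,1) + M(1,2)*ns);   % input admittance
      r = (n0 - Y) / (n0 + Y);             % amplitude reflection coefficient
      R = abs(r)^2;                        % reflectance
      fprintf('R = %.4f\n', R);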

  18. System Simulation of Nuclear Power Plant by Coupling RELAP5 and Matlab/Simulink

    SciTech Connect

    Meng Lin; Dong Hou; Zhihong Xu; Yanhua Yang; Ronghua Zhang

    2006-07-01

    Since the RELAP5 code has general and advanced features in thermal-hydraulic computation, it has been widely used in transient and accident safety analysis, experiment planning analysis, and system simulation. We wish to design, analyze, and verify a new Instrumentation and Control (I and C) system of a Nuclear Power Plant (NPP) based on this best-estimate code, and eventually develop our own engineering simulator. However, because RELAP5's capability for simulating control and protection systems is limited, its functionality must be extended for efficient, accurate, and flexible design and simulation of I and C systems. Matlab/Simulink, a scientific computation package and a powerful tool for research and simulation of plant process control, can compensate for this limitation. It was therefore selected as the I and C part to be coupled with the RELAP5 code to realize system simulation of NPPs. Two key technical problems had to be solved. One is dynamic data exchange, by which Matlab/Simulink receives plant parameters and returns control results. A database is used for communication between the two codes: a Dynamic Link Library (DLL) links the database to RELAP5, while a DLL and an S-Function are used on the Matlab/Simulink side. The other problem is synchronization between the two codes, to ensure consistency in global simulation time. Because Matlab/Simulink always computes faster than RELAP5, the simulation time is sent by RELAP5 and received by Matlab/Simulink, and a time-control subroutine added to the Matlab/Simulink simulation procedure restrains its advancement. In this way, Matlab/Simulink is dynamically coupled with RELAP5, and control and protection logic of NPPs can be freely designed in Matlab/Simulink and tested against best-estimate plant model feedback. A test case shows that the coupled calculation results are nearly identical to those of a single RELAP5 run with built-in control logic. In practice, a real Pressurized Water Reactor (PWR) is
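
    The synchronization scheme can be illustrated with a toy polling loop on the Matlab side. Everything below (the shared file name, the update interval, the function name) is a hypothetical placeholder, since the coupling described above actually exchanges data through a database via DLLs and an S-Function:

      % Hypothetical sketch: block Simulink-side execution until the
      % thermal-hydraulic code has advanced at least to t_simulink.
      function wait_for_relap(t_simulink)
          t_relap = -inf;
          while t_relap < t_simulink
              fid = fopen('relap_time.dat', 'r');   % time stamp written by RELAP5
              if fid > 0
                  val = fscanf(fid, '%f', 1);
                  fclose(fid);
                  if ~isempty(val), t_relap = val; end
              end
              pause(0.01);                          % poll every 10 ms
          end
      end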

  19. A Simulation Program for Dynamic Infrared (IR) Spectra

    ERIC Educational Resources Information Center

    Zoerb, Matthew C.; Harris, Charles B.

    2013-01-01

    A free program for the simulation of dynamic infrared (IR) spectra is presented. The program simulates the spectrum of two exchanging IR peaks based on simple input parameters. Larger systems can be simulated with minor modifications. The program is available as an executable for PCs or can be run in MATLAB on any operating system. Source…
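
    Two exchanging IR peaks are commonly modeled with the two-site Bloch-McConnell lineshape. The sketch below is that generic textbook calculation, not the distributed program itself; the site frequencies, linewidth, and exchange rate are arbitrary examples:

      % Two-site exchange lineshape (Bloch-McConnell), equal populations.
      wA = 2*pi*1950; wB = 2*pi*2050;      % site frequencies (hypothetical units)
      k  = 2*pi*30;                        % exchange rate
      T2 = 1/(2*pi*5);                     % dephasing time
      p  = [0.5; 0.5];                     % equal site populations
      K  = [k, -k; -k, k];                 % exchange matrix
      w  = 2*pi*linspace(1900, 2100, 2000);
      I  = zeros(size(w));
      for m = 1:numel(w)
          A = 1i*(w(m)*eye(2) - diag([wA wB])) + eye(2)/T2 + K;
          I(m) = real([1 1] * (A \ p));    % absorption lineshape at w(m)
      end
      plot(w/(2*pi), I); xlabel('frequency'); ylabel('intensity');

    At slow exchange (k small) the loop reproduces two Lorentzians; raising k coalesces them into one peak, which is the behavior such a simulation program displays.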

  20. Introducing Earth Sciences Students to Modeling Using MATLAB Exercises

    NASA Astrophysics Data System (ADS)

    Anderson, R. S.

    2003-12-01

    While we subject our students to math, physics, and chemistry courses to complement their geological studies, we rarely allow them to experience the joys of modeling earth systems. Given the degree to which the modern earth sciences rely upon models of complex systems, it seems appropriate to allow our students to develop some experience with this activity. In addition, as modeling is an unforgivingly logical exercise, it demands that the student absorb the fundamental concepts, the assumptions behind them, and the means of constraining the relevant parameters in a problem. These concepts commonly include conservation of some quantity, the fluxes of that quantity, and careful prescription of the boundary and initial conditions. I have used MATLAB as an entrance to this world, and will illustrate the products of the exercises we have worked through. This software is platform-independent, and has a wonderful graphics package (including movies) that can be invoked with one to several lines of code. The exercises should follow a progression from simple to complex, and serve to introduce the many discrete tasks within modeling. I advocate full immersion in the first exercise. Example exercises include: growth of spatter cones (summation of parabolic trajectories of lava bombs); response of thermal profiles in the earth to varying surface temperature (thermal conduction); hillslope or fault scarp evolution (topographic diffusion); growth and subsidence of volcanoes (flexure); and coral growth on a subsiding platform in the face of sealevel fluctuations (coral biology and light extinction). These exercises can be motivated by reading a piece in the classical or modern literature that either describes a model, or better yet serves to describe the system well but does not present a model. I have found that the generation of movies from even the early simulation exercises serves as an additional motivator for students. We discuss the models in each class meeting, and learn that there
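
    As one illustration of how compact such an exercise can be, the fault-scarp (topographic diffusion) problem needs only a forward-difference loop. This is a generic sketch with made-up parameters, not the author's classroom code:

      % Topographic diffusion of a fault scarp: dz/dt = kappa * d2z/dx2.
      % Minimal explicit finite-difference sketch with hypothetical values.
      kappa = 1e-2;                         % diffusivity, m^2/yr (hypothetical)
      dx = 1; x = (0:dx:100).';             % horizontal grid, m
      z  = 2*(x > 50);                      % initial 2 m scarp
      dt = 0.4*dx^2/kappa;                  % time step satisfying explicit stability
      for step = 1:500
          z(2:end-1) = z(2:end-1) + kappa*dt/dx^2 * diff(z, 2);  % interior update
      end
      plot(x, z); xlabel('distance (m)'); ylabel('elevation (m)');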

  1. A deterministic aggregate production planning model considering quality of products

    NASA Astrophysics Data System (ADS)

    Madadi, Najmeh; Yew Wong, Kuan

    2013-06-01

    Aggregate Production Planning (APP) is medium-term planning concerned with the lowest-cost method of production planning to meet customers' requirements and to satisfy fluctuating demand over a planning time horizon. The APP problem has been studied widely since it was introduced and formulated in the 1950s. However, most researchers in the APP area have concentrated on a few common objectives, such as minimization of cost, of fluctuation in the number of workers, and of inventory level. In particular, maintaining quality at a desirable level while minimizing cost has not been considered as an objective in previous studies. In this study, an attempt has been made to develop a multi-objective mixed integer linear programming model that serves those companies aiming to incur the minimum level of operational cost while maintaining quality at an acceptable level. To solve the multi-objective model, the Fuzzy Goal Programming approach and the max-min operator of Bellman-Zadeh were applied. At the final step, IBM ILOG CPLEX Optimization Studio software was used to obtain experimental results based on data collected from an automotive parts manufacturing company. The results show that incorporating quality in the model imposes some costs; a trade-off must therefore be made between the cost of producing higher-quality products and the cost the firm may incur from customer dissatisfaction and lost sales.
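
    The Bellman-Zadeh max-min operator reduces such a multi-objective model to a single problem: maximize the smallest membership value lambda over all fuzzy goals. A generic Matlab sketch with linprog is given below; the two cost objectives, their aspiration bounds, and the feasibility constraints are made-up placeholders, not the paper's model:

      % Bellman-Zadeh max-min aggregation as a single LP (generic sketch).
      % Two cost-type objectives f_k(x) = c_k'*x with hypothetical data.
      c1 = [3; 5];  L1 = 10; U1 = 40;      % objective 1 and its aspiration bounds
      c2 = [4; 2];  L2 =  8; U2 = 30;      % objective 2 and its aspiration bounds
      A  = [-1 -2; -3 -1];  b = [-6; -9];  % feasibility constraints A*x <= b
      % Variables: [x1; x2; lambda]. Maximize lambda => minimize -lambda.
      % Membership mu_k = (U_k - c_k'*x)/(U_k - L_k) >= lambda rearranges to
      % c_k'*x + lambda*(U_k - L_k) <= U_k, so the problem stays linear.
      f   = [0; 0; -1];
      Alp = [ c1.'  (U1-L1);
              c2.'  (U2-L2);
              A     zeros(2,1) ];
      blp = [U1; U2; b];
      lb  = [0; 0; 0];  ub = [inf; inf; 1];
      sol = linprog(f, Alp, blp, [], [], lb, ub);
      fprintf('x = (%.2f, %.2f), lambda = %.2f\n', sol(1), sol(2), sol(3));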

  2. Calculation of photon pulse height distribution using deterministic and Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Akhavan, Azadeh; Vosoughi, Naser

    2015-12-01

    Radiation transport techniques used in radiation detection systems fall into one of two categories, namely probabilistic and deterministic. While probabilistic methods are typically used for pulse height distribution simulation, recreating the behavior of each individual particle, the deterministic approach, which approximates the macroscopic behavior of particles by solution of the Boltzmann transport equation, is being developed because of its potential advantages in computational efficiency for complex radiation detection problems. In the current work, the linear transport equation is solved using two methods: a collided-components-of-the-scalar-flux algorithm, applied by iterating on the scattering source, and the ANISN deterministic computer code. The approach is presented in one dimension with anisotropic scattering orders up to P8 and angular quadrature orders up to S16. The multi-group gamma cross-section library required for this numerical transport simulation is generated in an appropriate discrete form. Finally, photon pulse height distributions are calculated indirectly by deterministic methods and compare favorably with those from Monte Carlo based codes, namely MCNPX and FLUKA.

  3. Controlling influenza disease: Comparison between discrete time Markov chain and deterministic model

    NASA Astrophysics Data System (ADS)

    Novkaniza, F.; Ivana, Aldila, D.

    2016-04-01

    Mathematical models of respiratory disease spread, one a Discrete Time Markov Chain (DTMC) and one a deterministic model with constant total population size, are analyzed and compared in this article. Medical treatment and the use of medical masks are included in the models as constant parameters for controlling the spread of influenza. Equilibrium points and the basic reproductive ratio, as endemic criteria, together with their level sets as functions of selected parameters, are given analytically and numerically as results of the deterministic model analysis. Assuming a constant total human population as in the deterministic model, the number of infected people is also analyzed with the DTMC model. Since Δt → 0, we may assume that the total number of infected people changes only from i to i + 1, i - 1, or i. An approximation of the probability of an outbreak via the gambler's ruin problem is presented. We find that no matter the value of the basic reproductive ratio ℛ0, whether larger or smaller than one, the number of infections always tends to 0 as t → ∞. Numerical simulations comparing the deterministic and DTMC approaches are given to support the interpretation and understanding of the model results.
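
    The gambler's-ruin approximation mentioned above has a standard closed form: starting from i infected individuals, the outbreak probability is 1 - (1/ℛ0)^i when ℛ0 > 1 and 0 otherwise. A minimal Matlab sketch (the ℛ0 value is an arbitrary example):

      % Gambler's-ruin approximation of outbreak probability from i infecteds.
      % P(outbreak) = 0 if R0 <= 1, else 1 - (1/R0)^i (standard DTMC result).
      R0 = 1.8;                            % hypothetical basic reproductive ratio
      i  = 1:5;                            % initial number of infected individuals
      Pout = (R0 > 1) .* (1 - (1./R0).^i); % outbreak probability for each i
      disp([i; Pout]);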

  4. Breakdown of deterministic lateral displacement efficiency for non-dilute suspensions: A numerical study.

    PubMed

    Vernekar, R; Krüger, T

    2015-09-01

    We investigate the effect of particle volume fraction on the efficiency of deterministic lateral displacement (DLD) devices. DLD is a popular passive sorting technique for microfluidic applications. Yet, it has been designed for treating dilute suspensions, and its efficiency for denser samples is not well known. We perform 3D simulations based on the immersed-boundary, lattice-Boltzmann and finite-element methods to model the flow of red blood cells (RBCs) in different DLD devices. We quantify the DLD efficiency in terms of appropriate "failure" probabilities and RBC counts in designated device outlets. Our main result is that the displacement mode breaks down upon an increase of RBC volume fraction, while the zigzag mode remains relatively robust. This suggests that the separation of larger particles (such as white blood cells) from a dense RBC background is simpler than separating smaller particles (such as platelets) from the same background. The observed breakdown stems from non-deterministic particle collisions interfering with the designed deterministic nature of DLD devices. Therefore, we postulate that dense suspension effects generally hamper efficient particle separation in devices based on deterministic principles.

  5. Operational Global Deterministic and Ensemble Wave Prediction Systems at Environment Canada

    NASA Astrophysics Data System (ADS)

    Bernier, Natacha; Peel, Syd; Bélanger, Jean-Marc; Roch, Michel; Lépine, Mario; Pellerin, Pierre; Henrique Alves, José; Tolman, Hendrik

    2015-04-01

    Canada's new global deterministic and ensemble wave prediction systems are presented together with an evaluation of their performance over a 5-month hindcast. Particular attention is paid to the Arctic Ocean, where accurate forecasts are crucial for maintaining safe activities such as drilling and vessel operation. The wave prediction systems are based on WAVEWATCHIII and are operated at grid spacings of 1/4° (deterministic) and 1/2° (ensemble). Both systems are run twice daily, with lead times of 120 h (5 days) for the deterministic system and 240 h (10 days) for the ensemble system. The wave prediction systems are shown to have skill in forecasting significant wave height and peak period over the following several days. Beyond lead times of 120 h, deterministic forecasts are extended using ensembles of wave forecasts to generate probabilistic forecasts for long-range events. New displays are used to summarize the wealth of information generated by the ensembles into depictions that could help support early warning systems.

  6. Deterministic Chaos in Open Well-stirred Bray-Liebhafsky Reaction System

    NASA Astrophysics Data System (ADS)

    Kolar-Anić, Ljiljana; Vukojević, Vladana; Pejić, Nataša; Grozdić, Tomislav; Anić, Slobodan

    2004-12-01

    Dynamics of the Bray-Liebhafsky (BL) oscillatory reaction is analyzed in a Continuously-fed well-Stirred Tank Reactor (CSTR). Deterministic chaos is found under different conditions when temperature and acidity are chosen as control parameters. Dynamic patterns observed in real experiments are also numerically simulated.

  7. Service-Oriented Architecture (SOA) Instantiation within a Hard Real-Time, Deterministic Combat System Environment

    ERIC Educational Resources Information Center

    Moreland, James D., Jr

    2013-01-01

    This research investigates the instantiation of a Service-Oriented Architecture (SOA) within a hard real-time (stringent time constraints), deterministic (maximum predictability) combat system (CS) environment. There are numerous stakeholders across the U.S. Department of the Navy who are affected by this development, and therefore the system…

  8. Taking Control: Stealth Assessment of Deterministic Behaviors within a Game-Based System

    ERIC Educational Resources Information Center

    Snow, Erica L.; Likens, Aaron D.; Allen, Laura K.; McNamara, Danielle S.

    2015-01-01

    Game-based environments frequently afford students the opportunity to exert agency over their learning paths by making various choices within the environment. The combination of log data from these systems and dynamic methodologies may serve as a stealth means to assess how students behave (i.e., deterministic or random) within these learning…

  9. A Comparison of Deterministic and Probabilistic Approaches to Measuring Learning Structures.

    ERIC Educational Resources Information Center

    Wilson, Mark

    1989-01-01

    Structure of the Observed Learning Outcome (SOLO) science superitems were examined from the perspectives of Guttman Scaling (deterministic) and Item Response Theory (probabilistic). Differences between the measurement bases for the two approaches, and the results for a small case study, are reported. (Author/MLW)

  10. Deterministic linear-optics quantum computing based on a hybrid approach

    SciTech Connect

    Lee, Seung-Woo; Jeong, Hyunseok

    2014-12-04

    We suggest a scheme for all-optical quantum computation using hybrid qubits. It enables one to efficiently perform universal linear-optical gate operations in a simple and near-deterministic way using hybrid entanglement as off-line resources.

  11. Deterministic LOCC transformation of three-qubit pure states and entanglement transfer

    NASA Astrophysics Data System (ADS)

    Tajima, Hiroyasu

    2013-02-01

    A necessary and sufficient condition for the possibility of a deterministic local operations and classical communication (LOCC) transformation of three-qubit pure states is given. The condition shows that the three-qubit pure states form a partially ordered set parametrized by five well-known entanglement parameters and one novel parameter; the five are the concurrences C_AB, C_AC, C_BC, the tangle τ_ABC, and the fifth parameter J_5 of Acín et al. (2000) Ref. [19], while the new one is the entanglement charge Q_e. The order of the partially ordered set is defined by the possibility of a deterministic LOCC transformation from one state to another. In this sense, the present condition is an extension of Nielsen's work (Nielsen (1999) [14]) to three-qubit pure states. We also clarify the rules of transfer and dissipation of the entanglement caused by deterministic LOCC transformations. Moreover, the minimum number of measurements needed to reproduce an arbitrary deterministic LOCC transformation between three-qubit pure states is given.

  12. Tag-mediated cooperation with non-deterministic genotype-phenotype mapping

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Chen, Shu

    2016-01-01

    Tag-mediated cooperation provides a helpful framework for resolving evolutionary social dilemmas. However, most previous studies have not taken into account the genotype-phenotype distinction in tags, which may play an important role in the process of evolution. To take this into consideration, we introduce non-deterministic genotype-phenotype mapping into a tag-based model with a spatial prisoner's dilemma. By our definition, similarity between genotypic tags does not directly imply similarity between phenotypic tags. We find that the non-deterministic mapping from genotypic tag to phenotypic tag has non-trivial effects on tag-mediated cooperation. Although we observe that high levels of cooperation can be established under a wide variety of conditions, especially when the decisiveness is moderate, the uncertainty in the determination of phenotypic tags may have a detrimental effect on the tag mechanism by disturbing the homophilic interaction structure which explains the promotion of cooperation in tag systems. Furthermore, the non-deterministic mapping may undermine the robustness of the tag mechanism with respect to various factors such as the structure of the tag space and the tag flexibility. This observation warns us about the danger of applying classical tag-based models to the analysis of empirical phenomena if the genotype-phenotype distinction is significant in the real world. Non-deterministic genotype-phenotype mapping thus provides a new perspective on the understanding of tag-mediated cooperation.

  13. Taking Control: Stealth Assessment of Deterministic Behaviors within a Game-Based System

    ERIC Educational Resources Information Center

    Snow, Erica L.; Likens, Aaron D.; Allen, Laura K.; McNamara, Danielle S.

    2016-01-01

    Game-based environments frequently afford students the opportunity to exert agency over their learning paths by making various choices within the environment. The combination of log data from these systems and dynamic methodologies may serve as a stealth means to assess how students behave (i.e., deterministic or random) within these learning…

  14. Deterministic Quantum Key Distribution Using Two Non-orthogonal Entangled States

    NASA Astrophysics Data System (ADS)

    Guo, Ying; Zeng, Gui-Hua

    2007-03-01

    A deterministic quantum key distribution scheme using two non-orthogonal entangled states is proposed. In the proposed scheme, communicators share key information by exchanging one travelling photon with two random and secret polarization angles. The security of the distributed key is guaranteed by three checking phases in a three-way channel and by the communicators' secret polarization angles.

  15. A DETERMINISTIC GEOMETRIC REPRESENTATION OF TEMPORAL RAINFALL: SENSITIVITY ANALYSIS FOR A STORM IN BOSTON. (R824780)

    EPA Science Inventory

    In an earlier study, Puente and Obregón [Water Resour. Res. 32(1996)2825] reported on the usage of a deterministic fractal–multifractal (FM) methodology to faithfully describe an 8.3 h high-resolution rainfall time series in Boston, gathered every 15 s ...

  16. Deterministic switching of hierarchy during wrinkling in quasi-planar bilayers

    SciTech Connect

    Saha, Sourabh K.; Culpepper, Martin L.

    2016-04-25

    Emergence of hierarchy during compression of quasi-planar bilayers is preceded by a mode-locked state during which the quasi-planar form persists. Transition to hierarchy is determined entirely by geometrically observable parameters. This results in a universal transition phase diagram that enables one to deterministically tune hierarchy even with limited knowledge about material properties.

  17. Non-Deterministic, Non-Traditional Methods (NDNTM)

    NASA Technical Reports Server (NTRS)

    Cruse, Thomas A.; Chamis, Christos C. (Technical Monitor)

    2001-01-01

    The review effort identified research opportunities related to the use of nondeterministic, nontraditional methods to support aerospace design. The scope of the study was restricted to structural design rather than other areas such as control system design. Thus, the observations and conclusions are limited by that scope. The review identified a number of key results. The results include the potential for NASA/AF collaboration in the area of a design environment for advanced space access vehicles. The following key points set the context and delineate the key results. The Principal Investigator's (PI's) context for this study derived from participation as a Panel Member in the Air Force Scientific Advisory Board (AF/SAB) Summer Study Panel on 'Whither Hypersonics?' A key message from the Summer Study effort was a perceived need for a national program for a space access vehicle whose operating characteristics of cost, availability, deployability, and reliability most closely match the NASA 3rd Generation Reusable Launch Vehicle (RLV). The Panel urged the AF to make a significant joint commitment to such a program just as soon as the AF defined specific requirements for space access consistent with the AF Aerospace Vision 2020. The review brought home a concurrent need for a national vehicle design environment. Engineering design system technology is at a time point from which a revolution as significant as that brought about by the finite element method is possible, this one focusing on information integration on a scale that far surpasses current design environments. The study therefore fully supported the concept, if not some of the details of the Intelligent Synthesis Environment (ISE). It became abundantly clear during this study that the government (AF, NASA) and industry are not moving in the same direction in this regard, in fact each is moving in its own direction. NASA/ISE is not yet in an effective leadership position in this regard. However, NASA does

  18. Deterministic Factors Overwhelm Stochastic Environmental Fluctuations as Drivers of Jellyfish Outbreaks

    PubMed Central

    Benedetti-Cecchi, Lisandro; Canepa, Antonio; Fuentes, Veronica; Tamburello, Laura; Purcell, Jennifer E.; Piraino, Stefano; Roberts, Jason; Boero, Ferdinando; Halpin, Patrick

    2015-01-01

    Jellyfish outbreaks are increasingly viewed as a deterministic response to escalating levels of environmental degradation and climate extremes. However, a comprehensive understanding of the influence of deterministic drivers and stochastic environmental variations favouring population renewal processes has remained elusive. This study quantifies the deterministic and stochastic components of environmental change that lead to outbreaks of the jellyfish Pelagia noctiluca in the Mediterranean Sea. Using data on jellyfish abundance collected at 241 sites along the Catalan coast from 2007 to 2010 we: (1) tested hypotheses about the influence of time-varying and spatial predictors of jellyfish outbreaks; (2) evaluated the relative importance of stochastic vs. deterministic forcing of outbreaks through the environmental bootstrap method; and (3) quantified return times of extreme events. Outbreaks were common in May and June and less likely in other summer months, which resulted in a negative relationship between outbreaks and SST. Cross- and along-shore advection by geostrophic flow were important concentrating forces of jellyfish, but most outbreaks occurred in the proximity of two canyons in the northern part of the study area. This result supported the recent hypothesis that canyons can funnel P. noctiluca blooms towards shore during upwelling. This can be a general, yet unappreciated mechanism leading to outbreaks of holoplanktonic jellyfish species. The environmental bootstrap indicated that stochastic environmental fluctuations have negligible effects on return times of outbreaks. Our analysis emphasized the importance of deterministic processes leading to jellyfish outbreaks compared to the stochastic component of environmental variation. A better understanding of how environmental drivers affect demographic and population processes in jellyfish species will increase the ability to anticipate jellyfish outbreaks in the future. PMID:26485278

  19. Deterministic Factors Overwhelm Stochastic Environmental Fluctuations as Drivers of Jellyfish Outbreaks.

    PubMed

    Benedetti-Cecchi, Lisandro; Canepa, Antonio; Fuentes, Veronica; Tamburello, Laura; Purcell, Jennifer E; Piraino, Stefano; Roberts, Jason; Boero, Ferdinando; Halpin, Patrick

    2015-01-01

    Jellyfish outbreaks are increasingly viewed as a deterministic response to escalating levels of environmental degradation and climate extremes. However, a comprehensive understanding of the influence of deterministic drivers and stochastic environmental variations favouring population renewal processes has remained elusive. This study quantifies the deterministic and stochastic components of environmental change that lead to outbreaks of the jellyfish Pelagia noctiluca in the Mediterranean Sea. Using data on jellyfish abundance collected at 241 sites along the Catalan coast from 2007 to 2010 we: (1) tested hypotheses about the influence of time-varying and spatial predictors of jellyfish outbreaks; (2) evaluated the relative importance of stochastic vs. deterministic forcing of outbreaks through the environmental bootstrap method; and (3) quantified return times of extreme events. Outbreaks were common in May and June and less likely in other summer months, which resulted in a negative relationship between outbreaks and SST. Cross- and along-shore advection by geostrophic flow were important concentrating forces of jellyfish, but most outbreaks occurred in the proximity of two canyons in the northern part of the study area. This result supported the recent hypothesis that canyons can funnel P. noctiluca blooms towards shore during upwelling. This can be a general, yet unappreciated mechanism leading to outbreaks of holoplanktonic jellyfish species. The environmental bootstrap indicated that stochastic environmental fluctuations have negligible effects on return times of outbreaks. Our analysis emphasized the importance of deterministic processes leading to jellyfish outbreaks compared to the stochastic component of environmental variation. A better understanding of how environmental drivers affect demographic and population processes in jellyfish species will increase the ability to anticipate jellyfish outbreaks in the future.

  20. Deterministic Design Optimization of Structures in OpenMDAO Framework

    NASA Technical Reports Server (NTRS)

    Coroneos, Rula M.; Pai, Shantaram S.

    2012-01-01

    Nonlinear programming algorithms play an important role in structural design optimization. Several such algorithms have been implemented in the OpenMDAO framework developed at NASA Glenn Research Center (GRC). OpenMDAO is an open-source engineering analysis framework, written in Python, for analyzing and solving Multi-Disciplinary Analysis and Optimization (MDAO) problems. It provides a number of solvers and optimizers, referred to as components and drivers, which users can leverage to build new tools and processes quickly and efficiently. Users may download, use, modify, and distribute the OpenMDAO software at no cost. This paper summarizes the process involved in analyzing and optimizing structural components utilizing the framework's structural solvers and several gradient-based optimizers along with a multi-objective genetic algorithm. For comparison purposes, the same structural components were analyzed and optimized using CometBoards, a NASA GRC developed code. The reliability and efficiency of the OpenMDAO framework were assessed and are reported here.

  1. Eye growth and myopia development: Unifying theory and Matlab model.

    PubMed

    Hung, George K; Mahadas, Kausalendra; Mohammad, Faisal

    2016-03-01

    The aim of this article is to present an updated unifying theory of the mechanisms underlying eye growth and myopia development. A series of model simulation programs were developed to illustrate the mechanism of eye growth regulation and myopia development. Two fundamental processes are presumed to govern the relationship between physiological optics and eye growth: genetically pre-programmed signaling and blur feedback. Cornea/lens is considered to have only a genetically pre-programmed component, whereas eye growth is considered to have both a genetically pre-programmed and a blur feedback component. Moreover, based on the Incremental Retinal-Defocus Theory (IRDT), the rate of change of blur size provides the direction for blur-driven regulation. The various factors affecting eye growth are shown in 5 simulations: (1 - unregulated eye growth): blur feedback is rendered ineffective, as in the case of form deprivation, so there is only genetically pre-programmed eye growth, generally resulting in myopia; (2 - regulated eye growth): blur feedback regulation demonstrates the emmetropization process, with abnormally excessive or reduced eye growth leading to myopia and hyperopia, respectively; (3 - repeated near-far viewing): simulation of large-to-small change in blur size as seen in the accommodative stimulus/response function, and via IRDT as well as nearwork-induced transient myopia (NITM), leading to the development of myopia; (4 - neurochemical bulk flow and diffusion): release of dopamine from the inner plexiform layer of the retina, and the subsequent diffusion and relay of neurochemical cascade show that a decrease in dopamine results in a reduction of proteoglycan synthesis rate, which leads to myopia; (5 - Simulink model): model of genetically pre-programmed signaling and blur feedback components that allows for different input functions to simulate experimental manipulations that result in hyperopia, emmetropia, and myopia. These model simulation programs

  2. Edwards Air Force Base Accelerates Flight Test Data Analysis Using MATLAB(Registered) and MathWorks(Trademark)

    DTIC Science & Technology

    2010-06-10

    Subject terms: Global Hawk, MATLAB®, Parallel Computing, Distributed Computing, Simulink. ...used Parallel Computing Toolbox to prepare the MATLAB® scripts to be executed by MATLAB® Distributed Computing Server workers running on the ...16 times faster. Parallel Computing Toolbox and MATLAB® Distributed Computing Server provided a one-for-one time savings with the number of

  3. MATLAB-based automated patch-clamp system for awake behaving mice.

    PubMed

    Desai, Niraj S; Siegel, Jennifer J; Taylor, William; Chitwood, Raymond A; Johnston, Daniel

    2015-08-01

    Automation has been an important part of biomedical research for decades, and the use of automated and robotic systems is now standard for such tasks as DNA sequencing, microfluidics, and high-throughput screening. Recently, Kodandaramaiah and colleagues (Nat Methods 9: 585-587, 2012) demonstrated, using anesthetized animals, the feasibility of automating blind patch-clamp recordings in vivo. Blind patch is a good target for automation because it is a complex yet highly stereotyped process that revolves around analysis of a single signal (electrode impedance) and movement along a single axis. Here, we introduce an automated system for blind patch-clamp recordings from awake, head-fixed mice running on a wheel. In its design, we were guided by 3 requirements: easy-to-use and easy-to-modify software; seamless integration of behavioral equipment; and efficient use of time. The resulting system employs equipment that is standard for patch recording rigs, moderately priced, or simple to make. It is written entirely in MATLAB, a programming environment that has an enormous user base in the neuroscience community and many available resources for analysis and instrument control. Using this system, we obtained 19 whole cell patch recordings from neurons in the prefrontal cortex of awake mice, aged 8-9 wk. Successful recordings had series resistances that averaged 52 ± 4 MΩ and required 5.7 ± 0.6 attempts to obtain. These numbers are comparable with those of experienced electrophysiologists working manually, and this system, written in a simple and familiar language, will be useful to many cellular electrophysiologists who wish to study awake behaving mice.

  4. MATLAB-based automated patch-clamp system for awake behaving mice

    PubMed Central

    Siegel, Jennifer J.; Taylor, William; Chitwood, Raymond A.; Johnston, Daniel

    2015-01-01

    Automation has been an important part of biomedical research for decades, and the use of automated and robotic systems is now standard for such tasks as DNA sequencing, microfluidics, and high-throughput screening. Recently, Kodandaramaiah and colleagues (Nat Methods 9: 585–587, 2012) demonstrated, using anesthetized animals, the feasibility of automating blind patch-clamp recordings in vivo. Blind patch is a good target for automation because it is a complex yet highly stereotyped process that revolves around analysis of a single signal (electrode impedance) and movement along a single axis. Here, we introduce an automated system for blind patch-clamp recordings from awake, head-fixed mice running on a wheel. In its design, we were guided by 3 requirements: easy-to-use and easy-to-modify software; seamless integration of behavioral equipment; and efficient use of time. The resulting system employs equipment that is standard for patch recording rigs, moderately priced, or simple to make. It is written entirely in MATLAB, a programming environment that has an enormous user base in the neuroscience community and many available resources for analysis and instrument control. Using this system, we obtained 19 whole cell patch recordings from neurons in the prefrontal cortex of awake mice, aged 8–9 wk. Successful recordings had series resistances that averaged 52 ± 4 MΩ and required 5.7 ± 0.6 attempts to obtain. These numbers are comparable with those of experienced electrophysiologists working manually, and this system, written in a simple and familiar language, will be useful to many cellular electrophysiologists who wish to study awake behaving mice. PMID:26084901

  5. nSTAT: open-source neural spike train analysis toolbox for Matlab.

    PubMed

    Cajigas, I; Malik, W Q; Brown, E N

    2012-11-15

    Over the last decade there has been a tremendous advance in the analytical tools available to neuroscientists to understand and model neural function. In particular, the point process - generalized linear model (PP-GLM) framework has been applied successfully to problems ranging from neuro-endocrine physiology to neural decoding. However, the lack of freely distributed software implementations of published PP-GLM algorithms together with problem-specific modifications required for their use, limit wide application of these techniques. In an effort to make existing PP-GLM methods more accessible to the neuroscience community, we have developed nSTAT--an open source neural spike train analysis toolbox for Matlab®. By adopting an object-oriented programming (OOP) approach, nSTAT allows users to easily manipulate data by performing operations on objects that have an intuitive connection to the experiment (spike trains, covariates, etc.), rather than by dealing with data in vector/matrix form. The algorithms implemented within nSTAT address a number of common problems including computation of peri-stimulus time histograms, quantification of the temporal response properties of neurons, and characterization of neural plasticity within and across trials. nSTAT provides a starting point for exploratory data analysis, allows for simple and systematic building and testing of point process models, and for decoding of stimulus variables based on point process models of neural function. By providing an open-source toolbox, we hope to establish a platform that can be easily used, modified, and extended by the scientific community to address limitations of current techniques and to extend available techniques to more complex problems.

  6. Fabrication of the Advanced X-ray Astrophysics Facility (AXAF) Optics: A Deterministic, Precision Engineering Approach to Optical Fabrication

    NASA Technical Reports Server (NTRS)

    Gordon, T. E.

    1995-01-01

    The mirror assembly of the AXAF observatory consists of four concentric, confocal, Wolter type 1 telescopes. Each telescope includes two conical grazing incidence mirrors, a paraboloid followed by a hyperboloid. Fabrication of these state-of-the-art optics is now complete, with predicted performance that surpasses the goals of the program. The fabrication of these optics, whose size and requirements exceed those of any previous x-ray mirrors, presented a challenging task requiring the use of precision engineering in many different forms. Virtually all of the equipment used for this effort required precision engineering. Accurate metrology required deterministic support of the mirrors in order to model the gravity distortions which will not be present on orbit. The primary axial instrument, known as the Precision Metrology Station (PMS), was a unique scanning Fizeau interferometer. After metrology was complete, the optics were placed in specially designed Glass Support Fixtures (GSFs) for installation on the Automated Cylindrical Grinder/Polishers (ACG/Ps). The GSFs were custom molded for each mirror element to match the shape of the outer surface to minimize distortions of the inner surface. The final performance of the telescope is expected to far exceed the original goals and expectations of the program.

  7. DANTSYS/MPI: a system for 3-D deterministic transport on parallel architectures

    SciTech Connect

    Baker, R.S.; Alcouffe, R.E.

    1996-12-31

    Since 1994, we have been using a data parallel form of our deterministic transport code DANTSYS to perform time-independent fixed source and eigenvalue calculations on the CM-200s at Los Alamos National Laboratory (LANL). Parallelization of the transport sweep is obtained by using a 2-D spatial decomposition which retains the ability to invert the source iteration equation in a single iteration (i.e., the diagonal plane sweep). We have now implemented a message passing version of DANTSYS, referred to as DANTSYS/MPI, on the Cray T3D installed at Los Alamos in 1995. By taking advantage of the SPMD (Single Program, Multiple Data) architecture of the Cray T3D, as well as its low latency communications network, we have managed to achieve grind times (time to solve a single cell in phase space) of less than 10 nanoseconds on the 512 PE (Processing Element) T3D, as opposed to typical grind times of 150-200 nanoseconds on a 2048 PE CM-200, or 300-400 nanoseconds on a single PE of a Cray Y-MP. In addition, we have also parallelized the Diffusion Synthetic Accelerator (DSA) equations which are used to accelerate the convergence of the transport equation. DANTSYS/MPI currently runs on traditional Cray PVPs and the Cray T3D, and its computational kernel (Sweep3D) has been ported to and tested on an array of SGI SMPs (Symmetric Multi-Processors), a network of IBM 590 workstations, an IBM SP2, and the Intel TFLOPS machine at Sandia National Laboratory. This paper describes the implementation of DANTSYS/MPI on the Cray T3D, and presents a simple performance model which accurately predicts the grind time as a function of the number of PEs and problem size, or scalability. This paper also describes the parallel implementation and performance of the elliptic solver used in DANTSYS/MPI for solving the synthetic acceleration equations.

  8. A new Matlab coder for generating Structured Text Language from matrix expression for PLC and PAC controllers

    NASA Astrophysics Data System (ADS)

    Buciakowski, Mariusz; Witczak, Piotr

    2017-01-01

    This paper presents a new Matlab toolbox for synthesis of Structured Text (ST) code for Programmable Logic Controllers (PLCs) and Programmable Automation Controllers (PACs). The tool can directly generate IEC 61131-3 Structured Text Language from a Matlab script for selected Integrated Development Environments (IDEs). The generated code can be verified and compared with the results obtained from Matlab simulation; it can then be compiled in the IDE and uploaded to a PLC or PAC controller for final verification. This approach leaves all available Matlab toolboxes at the programmer's disposal, allowing fast and easy synthesis of the developed algorithms.

  9. Changing patient population in Dhaka Hospital and Matlab Hospital of icddr,b.

    PubMed

    Das, S K; Rahman, A; Chisti, M J; Ahmed, S; Malek, M A; Salam, M A; Bardhan, P K; Faruque, A S G

    2014-02-01

    The Diarrhoeal Disease Surveillance System of icddr,b noted an increasing number of patients aged ≥60 years at urban Dhaka and rural Matlab from 2001 to 2012. Shigella and Vibrio cholerae were more frequently isolated from elderly people than from children under 5 years and adults aged 5-59 in both areas. The resistance of Shigella to various drugs in Dhaka and Matlab was trimethoprim-sulphamethoxazole (72-63%), ampicillin (43-55%), nalidixic acid (58-61%), mecillinam (12-9%), azithromycin (13-0%), ciprofloxacin (11-13%) and ceftriaxone (11-0%). Vibrio cholerae isolated in Dhaka and Matlab was resistant to trimethoprim-sulphamethoxazole (98-94%), furazolidone (100%), erythromycin (71-53%), tetracycline (46-44%), ciprofloxacin (3-10%) and azithromycin (3-0%).

  10. Integration of MATLAB Simulink(Registered Trademark) Models with the Vertical Motion Simulator

    NASA Technical Reports Server (NTRS)

    Lewis, Emily K.; Vuong, Nghia D.

    2012-01-01

    This paper describes the integration of MATLAB Simulink(Registered Trademark) models into the Vertical Motion Simulator (VMS) at NASA Ames Research Center. The VMS is a high-fidelity, large motion flight simulator that is capable of simulating a variety of aerospace vehicles. The integration of MATLAB Simulink models into the VMS needed to retain the development flexibility of the MATLAB environment and allow rapid deployment of model changes. The process developed at the VMS was used successfully in a number of recent simulation experiments. This accomplishment demonstrated that model integrity was preserved while working within the hard real-time run environment of the VMS architecture, and that the unique flexibility of the VMS to meet diverse research requirements was maintained.

  11. Linking Advanced Visualization and MATLAB for the Analysis of 3D Gene Expression Data

    SciTech Connect

    Ruebel, Oliver; Keranen, Soile V.E.; Biggin, Mark; Knowles, David W.; Weber, Gunther H.; Hagen, Hans; Hamann, Bernd; Bethel, E. Wes

    2011-03-30

    Three-dimensional gene expression PointCloud data generated by the Berkeley Drosophila Transcription Network Project (BDTNP) provides quantitative information about the spatial and temporal expression of genes in early Drosophila embryos at cellular resolution. The BDTNP team visualizes and analyzes PointCloud data using the software application PointCloudXplore (PCX). To maximize the impact of novel, complex data sets such as PointClouds, the data needs to be accessible to biologists and comprehensible to developers of analysis functions. We address this challenge by linking PCX and Matlab via a dedicated interface, thereby providing biologists seamless access to advanced data analysis functions and giving bioinformatics researchers the opportunity to integrate their analysis directly into the visualization application. To demonstrate the usefulness of this approach, we computationally model parts of the expression pattern of the gene even skipped using a genetic algorithm implemented in Matlab and integrated into PCX via our Matlab interface.

  12. A Matlab library for solving quasi-static volume conduction problems using the boundary element method.

    PubMed

    Stenroos, M; Mäntynen, V; Nenonen, J

    2007-12-01

    The boundary element method (BEM) is commonly used in the modeling of bioelectromagnetic phenomena. The Matlab language is increasingly popular among students and researchers, but there has been no free, easy-to-use Matlab library for boundary element computations. We present a hands-on, freely available Matlab BEM source code for solving bioelectromagnetic volume conduction problems and any (quasi-)static potential problems that obey the Laplace equation. The basic principle of the BEM is presented, and discretization of the surface integral equation for the electric potential is worked through in detail. The contents and design of the library are described, and results of example computations in spherical volume conductors are validated against analytical solutions. Three application examples are also presented. Further information, source code for the application examples, and instructions for obtaining the library are available on the library's web page (http://biomed.tkk.fi/BEM).

  13. Coupling Legacy and Contemporary Deterministic Codes to Goldsim for Probabilistic Assessments of Potential Low-Level Waste Repository Sites

    NASA Astrophysics Data System (ADS)

    Mattie, P. D.; Knowlton, R. G.; Arnold, B. W.; Tien, N.; Kuo, M.

    2006-12-01

    Sandia National Laboratories (Sandia), a U.S. Department of Energy National Laboratory, has over 30 years of experience in radioactive waste disposal and is providing assistance internationally in a number of areas relevant to the safety assessment of radioactive waste disposal systems. International technology transfer efforts are often hampered by small budgets, time schedule constraints, and a lack of experienced personnel in countries with small radioactive waste disposal programs. In an effort to surmount these difficulties, Sandia has developed a system that utilizes a combination of commercially available codes and existing legacy codes for probabilistic safety assessment modeling, which facilitates technology transfer and maximizes limited available funding. Numerous codes developed and endorsed by the United States Nuclear Regulatory Commission, and codes developed and maintained by the United States Department of Energy, are generally available to foreign countries after addressing import/export control and copyright requirements. From a programmatic view, it is easier to utilize existing codes than to develop new ones. From an economic perspective, it is not possible for most countries with small radioactive waste disposal programs to maintain complex software that meets the rigors of both domestic regulatory requirements and international peer review. Therefore, revitalization of deterministic legacy codes, as well as adaptation of contemporary deterministic codes, provides a credible and solid computational platform for constructing probabilistic safety assessment models. External model linkage capabilities in Goldsim, and the techniques applied to facilitate this process, will be presented using example applications, including Breach, Leach, and Transport-Multiple Species (BLT-MS), a U.S. NRC sponsored code simulating release and transport of contaminants from a subsurface low-level waste disposal facility, used in a cooperative technology transfer

  14. A pedagogical walkthrough of computational modeling and simulation of Wnt signaling pathway using static causal models in MATLAB.

    PubMed

    Sinha, Shriprakash

    2016-12-01

    Simulation study in systems biology involving computational experiments dealing with Wnt signaling pathways abound in literature but often lack a pedagogical perspective that might ease the understanding of beginner students and researchers in transition, who intend to work on the modeling of the pathway. This paucity might happen due to restrictive business policies which enforce an unwanted embargo on the sharing of important scientific knowledge. A tutorial introduction to computational modeling of Wnt signaling pathway in a human colorectal cancer dataset using static Bayesian network models is provided. The walkthrough might aid biologists/informaticians in understanding the design of computational experiments that is interleaved with exposition of the Matlab code and causal models from Bayesian network toolbox. The manuscript elucidates the coding contents of the advance article by Sinha (Integr. Biol. 6:1034-1048, 2014) and takes the reader in a step-by-step process of how (a) the collection and the transformation of the available biological information from literature is done, (b) the integration of the heterogeneous data and prior biological knowledge in the network is achieved, (c) the simulation study is designed, (d) the hypothesis regarding a biological phenomena is transformed into computational framework, and (e) results and inferences drawn using d-connectivity/separability are reported. The manuscript finally ends with a programming assignment to help the readers get hands-on experience of a perturbation project. Description of Matlab files is made available under GNU GPL v3 license at the Google code project on https://code.google.com/p/static-bn-for-wnt-signaling-pathway and https://sites.google.com/site/shriprakashsinha/shriprakashsinha/projects/static-bn-for-wnt-signaling-pathway. Latest updates can be found in the latter website.
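
    For a flavor of the static-Bayesian-network machinery involved, the following sketch uses the conventions of the open-source Bayes Net Toolbox (BNT) for Matlab. The two-node network (Gene -> Phenotype) and its conditional probability tables are made up for illustration and are not the Wnt pathway model of the paper:

      % Tiny static Bayesian network in Bayes Net Toolbox (BNT) style.
      N = 2; dag = zeros(N); dag(1,2) = 1;          % edge: node 1 -> node 2
      ns = [2 2];                                   % both nodes binary
      bnet = mk_bnet(dag, ns);
      bnet.CPD{1} = tabular_CPD(bnet, 1, 'CPT', [0.7 0.3]);        % P(gene)
      bnet.CPD{2} = tabular_CPD(bnet, 2, 'CPT', [0.9 0.2 0.1 0.8]); % P(phen|gene)
      engine = jtree_inf_engine(bnet);              % exact inference engine
      evidence = cell(1, N); evidence{2} = 2;       % observe phenotype = state 2
      engine = enter_evidence(engine, evidence);
      marg = marginal_nodes(engine, 1);             % posterior over the gene node
      disp(marg.T);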

  15. A platform for dynamic simulation and control of movement based on OpenSim and MATLAB.

    PubMed

    Mansouri, Misagh; Reinbolt, Jeffrey A

    2012-05-11

    Numerical simulations play an important role in solving complex engineering problems and have the potential to revolutionize medical decision making and treatment strategies. In this paper, we combine the rapid model-based design, control systems, and powerful numerical method strengths of MATLAB/Simulink with the simulation and human movement dynamics strengths of OpenSim by developing a new interface between the two software tools. OpenSim is integrated with Simulink using the MATLAB S-function mechanism, and the interface is demonstrated using both open-loop and closed-loop control systems. While the open-loop system uses MATLAB/Simulink to separately reproduce the OpenSim Forward Dynamics Tool, the closed-loop system adds the unique feature of feedback control to OpenSim, which is necessary for most human movement simulations. An arm model example was successfully used in both open-loop and closed-loop cases. For the open-loop case, the simulation reproduced results from the OpenSim Forward Dynamics Tool with root mean square (RMS) differences of 0.03° for the shoulder elevation angle and 0.06° for the elbow flexion angle. MATLAB's variable step-size integrator reduced the time required to generate the forward dynamic simulation from 7.1 s (OpenSim) to 2.9 s (MATLAB). For the closed-loop case, a proportional-integral-derivative controller was used to successfully balance a pole on the model's hand despite random force disturbances on the pole. The new interface presented here not only integrates the OpenSim and MATLAB/Simulink software tools, but also will allow neuroscientists, physiologists, biomechanists, and physical therapists to adapt and generate new solutions as treatments for musculoskeletal conditions.
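
    The closed-loop ingredient here is an ordinary PID law. A minimal discrete-time PID sketch in Matlab is given below; the first-order plant and the gains are hypothetical stand-ins, not the OpenSim arm model or the paper's controller:

      % Minimal discrete PID controller regulating a toy first-order plant.
      Kp = 8; Ki = 4; Kd = 0.5;            % PID gains (made-up values)
      dt = 0.01; T = 0:dt:5;
      x = 0; ref = 1;                      % plant state and setpoint
      eInt = 0; ePrev = 0; hist = zeros(size(T));
      for n = 1:numel(T)
          e = ref - x;                     % tracking error
          eInt = eInt + e*dt;              % integral term
          u = Kp*e + Ki*eInt + Kd*(e - ePrev)/dt;   % PID control law
          ePrev = e;
          x = x + dt*(-x + u);             % toy plant: dx/dt = -x + u
          hist(n) = x;
      end
      plot(T, hist); xlabel('time (s)'); ylabel('output');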

  16. BioSigPlot: an opensource tool for the visualization of multi-channel biomedical signals with Matlab.

    PubMed

    Boudet, Samuel; Peyrodie, Laurent; Gallois, Philippe; de l'Aulnoit, Denis Houzé; Cao, Hua; Forzy, Gérard

    2013-01-01

    This paper presents a Matlab-based (MathWorks Inc.) software package called BioSigPlot for the visualization of multi-channel biomedical signals, particularly EEG. The tool is designed for researchers in both engineering and medicine who have to collaborate to visualize and analyze signals. It aims to provide a highly customizable interface for signal processing experimentation, plotting several kinds of signals while integrating the tools physicians commonly use. The main advantages compared to other existing programs are multi-dataset display, synchronization with video, and online processing. In addition, the program uses object-oriented programming, so the interface can be controlled both by graphic controls and by command lines. It can be used as an EEGLAB plug-in, but since it is not limited to EEG, it is distributed separately. BioSigPlot is distributed free of charge (http://biosigplot.sourceforge.net) under the terms of the GNU Public License for non-commercial use and open-source development.

  17. A MATLAB package for the EIDORS project to reconstruct two-dimensional EIT images.

    PubMed

    Vauhkonen, M; Lionheart, W R; Heikkinen, L M; Vauhkonen, P J; Kaipio, J P

    2001-02-01

    The EIDORS (electrical impedance and diffuse optical reconstruction software) project aims to produce a software system for reconstructing images from electrical or diffuse optical data. MATLAB is used in the EIDORS project for rapid prototyping, graphical user interface construction, and image display. We have written a MATLAB package (http://venda.uku.fi/vauhkon/) which can be used for two-dimensional mesh generation, solving the forward problem, and reconstructing and displaying the reconstructed images (resistivity or admittivity). In this paper we briefly describe the mathematical theory on which the codes are based and give some examples of the capabilities of the package.

  18. matNMR: a flexible toolbox for processing, analyzing and visualizing magnetic resonance data in Matlab.

    PubMed

    van Beek, Jacco D

    2007-07-01

    matNMR (matnmr.sourceforge.net) is a toolbox for processing, analyzing and visualizing magnetic-resonance data within the Matlab environment (www.mathworks.com) that aims for control, flexibility and extendability. Processing can be done using either a graphical user interface or with command-line scripts, both of which allow user-defined processing or analysis functions to be incorporated at any time. The direct access to data points during processing, and the extensive library of mathematical and visualization routines provided by Matlab, afford the high degree of control and flexibility needed in modern magnetic-resonance research.

  19. Improve Problem Solving Skills through Adapting Programming Tools

    NASA Technical Reports Server (NTRS)

    Shaykhian, Linda H.; Shaykhian, Gholam Ali

    2007-01-01

    There are numerous ways for engineers and students to become better problem-solvers. The use of command-line and visual programming tools can help to model a problem and formulate a solution through visualization. The analysis of problem attributes and constraints provides insight into the scope and complexity of the problem. The visualization aspect of the problem-solving approach tends to make students and engineers more systematic in their thought process and helps them catch errors before proceeding too far in the wrong direction. The problem-solver identifies and defines important terms, variables, rules, and procedures required for solving a problem. Every step required to construct the problem solution can be defined in program commands that produce intermediate output. This paper advocates improving problem-solving skills through the use of a programming tool. MatLab, created by MathWorks, is an interactive numerical computing environment and programming language. It is a matrix-based system that easily lends itself to matrix manipulation and to plotting of functions and data. MatLab can be used interactively at the command line, or sequences of commands can be saved in a file as a script or as named functions. Prior programming experience is not required to use MatLab commands. GNU Octave, part of the GNU project and a free program for performing numerical computations, is comparable to MatLab. MatLab visual and command programming are presented here.
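
    The script/function distinction mentioned above can be made concrete with two tiny files; the file names and contents are illustrative examples only:

      % plotSine.m -- a script: statements run in the base workspace,
      % so f and t remain visible after the script finishes.
      f = 2;                          % frequency in Hz
      t = 0:0.01:1;
      plot(t, sin(2*pi*f*t));

      % rmsValue.m -- a function: x and y are local to the function,
      % so nothing leaks into the caller's workspace.
      function y = rmsValue(x)
          y = sqrt(mean(x.^2));       % root-mean-square of a vector
      end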

  20. Introduction to Parallel Programming and pMatlab v2.0

    DTIC Science & Technology

    2011-01-01

    ... further. Even many modern desktop machines are multiprocessors, supporting two CPUs. On the other end is the Beowulf cluster, which uses commodity networks (e.g., Gigabit Ethernet) to connect commodity computers (e.g., Dell) [3]. The advantage of Beowulf clusters is their lower cost and greater availability of components. Performance of Beowulf clusters has begun to rival that of traditional “supercomputers.” In November 2003, the TOP500 ...
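
    pMatlab itself parallelizes MATLAB through distributed matrices and maps; that API is not reproduced here. As a generic illustration of data-parallel MATLAB of the kind this document introduces, the sketch below uses the parfor loop of the Parallel Computing Toolbox to spread independent iterations across workers.

        % Generic data-parallel MATLAB sketch (Parallel Computing Toolbox
        % parfor, not the pMatlab map/distributed-matrix API).
        n = 1000;
        results = zeros(n, 1);
        parfor k = 1:n                  % iterations run on parallel workers
            % each iteration is an independent unit of work
            results(k) = max(abs(eig(randn(100))));
        end
        fprintf('mean spectral radius estimate: %.3f\n', mean(results));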

  1. Computer Model for Manipulation of a Multibody System Using MATLAB

    DTIC Science & Technology

    2013-09-01

    ... format for constructing input files. The report also provides a user’s guide for a graphical user interface that was created. ... This report discusses a program that was written in order to support the FragFly model. The FragFly model was created to assess incapacitation of human targets given a detonation of a fragmenting munition. The model creates fragments using a ZDATA file (see “Proposed ...

  2. Spirally polarized beams for polarimetry measurements of deterministic and homogeneous samples

    NASA Astrophysics Data System (ADS)

    de Sande, Juan Carlos González; Santarsiero, Massimo; Piquero, Gemma

    2017-04-01

    The use of spirally polarized beams (SPBs) in polarimetric measurements of homogeneous and deterministic samples is proposed. Since across any transverse plane such beams present all possible linearly polarized states at once, the complete Mueller matrix of deterministic samples can be recovered with a reduced number of measurements and small errors. Furthermore, SPBs present the same polarization pattern across any transverse plane during propagation, and the same happens for the field propagated after the sample, so that both the sample plane and the plane where the polarization of the field is measured can be chosen at will. Experimental results are presented for the particular case of an azimuthally polarized beam and samples consisting of rotated retardation plates and linear polarizers.
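
    For concreteness, the sketch below uses standard Mueller calculus (sign conventions vary between texts) to model the sample/probe combination described above: a retardation plate rotated by an angle theta acting on the linearly polarized state at local angle phi that an azimuthally polarized beam provides. All parameter values are illustrative.

        % Standard Mueller calculus sketch (not the authors' code).
        rot = @(a) [1 0 0 0; 0 cos(2*a) sin(2*a) 0; ...
                    0 -sin(2*a) cos(2*a) 0; 0 0 0 1];      % frame rotation
        ret = @(d) [1 0 0 0; 0 1 0 0; ...
                    0 0 cos(d) sin(d); 0 0 -sin(d) cos(d)]; % retarder
        M   = @(d,th) rot(-th) * ret(d) * rot(th);          % rotated retarder

        phi  = pi/6;                           % local polarization angle
        Sin  = [1; cos(2*phi); sin(2*phi); 0]; % linear state at angle phi
        Sout = M(pi/2, pi/8) * Sin             % quarter-wave plate at 22.5 deg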

  3. Full 3D visualization tool-kit for Monte Carlo and deterministic transport codes

    SciTech Connect

    Frambati, S.; Frignani, M.

    2012-07-01

    We propose a package of tools capable of translating the geometric inputs and outputs of many Monte Carlo and deterministic radiation transport codes into open-source file formats. These tools are aimed at bridging the gap between trusted, widely used radiation analysis codes and the very powerful, more recent visualization software in common use, thus supporting the design process and helping with shielding optimization. Three main lines of development were followed: mesh-based analysis of Monte Carlo codes, mesh-based analysis of deterministic codes, and Monte Carlo surface meshing. The developed kit is a powerful and cost-effective tool for computer-aided design work by radiation transport code users in the nuclear field, in particular in core design and radiation analysis. (authors)

  4. Fast deterministic switching in orthogonal spin torque devices via the control of the relative spin polarizations

    SciTech Connect

    Park, Junbo; Buhrman, R. A.; Ralph, D. C.

    2013-12-16

    We model 100 ps pulse switching dynamics of orthogonal spin transfer (OST) devices that employ an out-of-plane polarizer and an in-plane polarizer. Simulation results indicate that increasing the spin polarization ratio, C_P = P_IPP/P_OPP, results in deterministic switching of the free layer without over-rotation (360° rotation). By using spin torque asymmetry to realize an enhanced effective P_IPP, we experimentally demonstrate this behavior in OST devices for parallel-to-antiparallel switching. Modeling predicts that decreasing the effective demagnetization field can substantially reduce the minimum C_P required to attain deterministic switching, while retaining a low critical switching current, I_p ∼ 500 μA.

  5. Deterministic quantum-public-key encryption: Forward search attack and randomization

    NASA Astrophysics Data System (ADS)

    Nikolopoulos, Georgios M.; Ioannou, Lawrence M.

    2009-04-01

    In the classical setting, public-key encryption requires randomness in order to be secure against a forward search attack, whereby an adversary compares the encryption of a guess of the secret message with the encryption of the actual secret message. We show that this is also true in the information-theoretic setting—where the public keys are quantum systems—by defining and giving an example of a forward search attack for any deterministic quantum-public-key bit-encryption scheme. However, unlike in the classical setting, we show that any such deterministic scheme can be used as a black box to build a randomized bit-encryption scheme that is no longer susceptible to this attack.

  6. Deterministic coupling of delta-doped nitrogen vacancy centers to a nanobeam photonic crystal cavity

    SciTech Connect

    Lee, Jonathan C.; Cui, Shanying; Zhang, Xingyu; Russell, Kasey J.; Magyar, Andrew P.; Hu, Evelyn L.; Bracher, David O.; Ohno, Kenichi; McLellan, Claire A.; Alemán, Benjamin; Bleszynski Jayich, Ania; Andrich, Paolo; Awschalom, David; Aharonovich, Igor

    2014-12-29

    The negatively charged nitrogen vacancy center (NV) in diamond has generated significant interest as a platform for quantum information processing and sensing in the solid state. For most applications, high quality optical cavities are required to enhance the NV zero-phonon line (ZPL) emission. An outstanding challenge in maximizing the degree of NV-cavity coupling is the deterministic placement of NVs within the cavity. Here, we report photonic crystal nanobeam cavities coupled to NVs incorporated by a delta-doping technique that allows nanometer-scale vertical positioning of the emitters. We demonstrate cavities with Q up to ∼24 000 and mode volume V ∼ 0.47(λ/n)^3, as well as resonant enhancement of the ZPL of an NV ensemble with a Purcell factor of ∼20. Our fabrication technique provides a first step towards deterministic NV-cavity coupling using spatial control of the emitters.

  7. Analysis of deterministic swapping of photonic and atomic states through single-photon Raman interaction

    NASA Astrophysics Data System (ADS)

    Rosenblum, Serge; Borne, Adrien; Dayan, Barak

    2017-03-01

    The long-standing goal of deterministic quantum interactions between single photons and single atoms was recently realized in various experiments. Among these, an appealing demonstration relied on single-photon Raman interaction (SPRINT) in a three-level atom coupled to a single-mode waveguide. In essence, the interference-based process of SPRINT deterministically swaps the qubits encoded in a single photon and a single atom, without the need for additional control pulses. It can also be harnessed to construct passive entangling quantum gates, and can therefore form the basis for scalable quantum networks in which communication between the nodes is carried out only by single-photon pulses. Here we present an analytical and numerical study of SPRINT, characterizing its limitations and defining parameters for its optimal operation. Specifically, we study the effect of losses, imperfect polarization, and the presence of multiple excited states. In all cases we discuss strategies for restoring the operation of SPRINT.

  8. Transmission Microscopy with Nanometer Resolution Using a Deterministic Single Ion Source

    NASA Astrophysics Data System (ADS)

    Jacob, Georg; Groot-Berning, Karin; Wolf, Sebastian; Ulm, Stefan; Couturier, Luc; Dawkins, Samuel T.; Poschinger, Ulrich G.; Schmidt-Kaler, Ferdinand; Singer, Kilian

    2016-07-01

    We realize a single-particle microscope by using deterministically extracted laser-cooled 40Ca+ ions from a Paul trap as probe particles for transmission imaging. We demonstrate focusing of the ions to a spot size of 5.8 ± 1.0 nm and a minimum two-sample deviation of the beam position of 1.5 nm in the focal plane. The deterministic source, even when used in combination with an imperfect detector, gives rise to a fivefold increase in the signal-to-noise ratio as compared with conventional Poissonian sources. Gating of the detector signal by the extraction event suppresses dark counts by 6 orders of magnitude. We implement a Bayes experimental design approach to microscopy in order to maximize the gain in spatial information. We demonstrate this method by determining the position of a 1 μm circular hole structure to a precision of 2.7 nm using only 579 probe particles.

  9. Integrated deterministic and probabilistic safety analysis for safety assessment of nuclear power plants

    SciTech Connect

    Di Maio, Francesco; Zio, Enrico; Smith, Curtis; Rychkov, Valentin

    2015-07-06

    The present special issue contains an overview of the research in the field of Integrated Deterministic and Probabilistic Safety Assessment (IDPSA) of Nuclear Power Plants (NPPs). Traditionally, safety regulation for NPP design and operation has been based on Deterministic Safety Assessment (DSA) methods to verify criteria that assure plant safety in a number of postulated Design Basis Accident (DBA) scenarios. Referring to such criteria, it is also possible to identify those plant Structures, Systems, and Components (SSCs) and activities that are most important for safety within those postulated scenarios. Then, the design, operation, and maintenance of these “safety-related” SSCs and activities are controlled through regulatory requirements and supported by Probabilistic Safety Assessment (PSA).

  10. Integrated deterministic and probabilistic safety analysis for safety assessment of nuclear power plants

    DOE PAGES

    Di Maio, Francesco; Zio, Enrico; Smith, Curtis; ...

    2015-07-06

    The present special issue contains an overview of the research in the field of Integrated Deterministic and Probabilistic Safety Assessment (IDPSA) of Nuclear Power Plants (NPPs). Traditionally, safety regulation for NPP design and operation has been based on Deterministic Safety Assessment (DSA) methods to verify criteria that assure plant safety in a number of postulated Design Basis Accident (DBA) scenarios. Referring to such criteria, it is also possible to identify those plant Structures, Systems, and Components (SSCs) and activities that are most important for safety within those postulated scenarios. Then, the design, operation, and maintenance of these “safety-related” SSCs and activities are controlled through regulatory requirements and supported by Probabilistic Safety Assessment (PSA).

  11. Spectral shifts generated by scattering of Gaussian Schell-model arrays beam from a deterministic medium

    NASA Astrophysics Data System (ADS)

    Wang, Xun; Liu, Zhirong; Huang, Kelin; Sun, Jingbo

    2017-03-01

    Within the theory of the first-order Born approximation, analytical expressions for a Gaussian Schell-model array (GSMA) beam scattered by a deterministic medium in the far zone are derived. Using the analytical formula obtained, shifts of the GSMA beam's scattered spectrum are numerically investigated. Results show that the scattering directions sx and sy, the effective radius σ of the scattering medium, the initial beam transverse width σ0, the correlation widths δx and δy of the source, and the line width Γ0 of the incident spectrum strongly influence the distributions of the normalized scattered spectrum in the far zone. These features of the GSMA beam's scattered spectrum could be used to obtain information about the structure of a deterministic medium.

  12. Scaling of weighted spectral distribution in deterministic scale-free networks

    NASA Astrophysics Data System (ADS)

    Jiao, Bo; Nie, Yuan-ping; Shi, Jian-mai; Huang, Cheng-dong; Zhou, Ying; Du, Jing; Guo, Rong-hua; Tao, Ye-rong

    2016-06-01

    Scale-free networks are abundant in the real world. In this paper, we investigate the scaling properties of the weighted spectral distribution in several deterministic and stochastic models of evolving scale-free networks. First, we construct a new deterministic scale-free model whose node degrees have a unified format. Using graph structure features, we derive a precise formula for the spectral metric in this model. This formula verifies that the spectral metric grows sublinearly as network size (i.e., the number of nodes) grows. Additionally, the mathematical reasoning behind the precise formula theoretically provides detailed explanations for this scaling property. Finally, we validate the scaling properties of the spectral metric using several stochastic models. The experimental results show that this scaling property is retained regardless of local-world effects, node deletion, and assortativity adjustment.
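
    The paper's exact formula is model-specific and not reproduced here. As an illustration, the weighted spectral distribution is commonly computed from the eigenvalues of the normalized graph Laplacian, in its simplest form as sum_i (1 - lambda_i)^N with N = 4; the hedged sketch below evaluates it for a random graph.

        % Sketch: weighted spectral distribution of a random graph, using
        % the common simplified form sum((1 - lambda).^4) over
        % normalized-Laplacian eigenvalues. The graph model is illustrative.
        n = 200;
        A = rand(n) < 0.05;  A = triu(A, 1);  A = A + A'; % random undirected graph
        d = sum(A, 2);  d(d == 0) = 1;          % guard isolated nodes
        Dinv = diag(1 ./ sqrt(d));
        L = eye(n) - Dinv * A * Dinv;           % normalized Laplacian
        lambda = eig(L);
        wsd = sum((1 - lambda).^4);             % the spectral metric
        fprintf('WSD = %.3f for n = %d nodes\n', wsd, n);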

  13. Deterministic Combatmodels, Interim Report 1: Literature Research (Deterministische Gevechtsmodellen, Interim Rapport 1: Literatuur Onderzoek)

    DTIC Science & Technology

    1991-12-01

    This report contains a description of a study of the existing literature on deterministic combat models. The Lanchester equations are described, both ... a drawback of these models is that carrying out a study with such a model takes a fairly large amount of time, because a large number of runs must ... arise. The assumptions underlying the classical Lanchester equations for aimed fire are the following: each side is internally ...
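
    The classical Lanchester equations for aimed fire mentioned above take the form dx/dt = -a*y, dy/dt = -b*x, where x and y are the opposing force strengths and a, b the attrition coefficients; the quantity b*x^2 - a*y^2 is conserved, which is Lanchester's square law. A minimal MATLAB integration, with assumed coefficients, is sketched below.

        % Lanchester aimed-fire (square-law) model, illustrative values.
        a = 0.8;  b = 0.5;                        % attrition coefficients
        f = @(t, s) [-a*s(2); -b*s(1)];           % s = [x; y]
        [t, s] = ode45(f, [0 5], [100; 120]);     % initial strengths
        s(s < 0) = 0;   % crude clamp for display; properly, integration
                        % stops when one force reaches zero
        plot(t, s); legend('x (force A)', 'y (force B)'); xlabel('time');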

  14. A new approach to state estimation in deterministic digital control systems

    NASA Technical Reports Server (NTRS)

    Polites, Michael E.

    1987-01-01

    The paper presents a new approach to state estimation in deterministic digital control systems. The scheme is based on sampling the output of the plant at a high rate and prefiltering the discrete measurements in a multi-input/multi-output moving average (MA) process. The coefficient matrices in the MA prefilter are selected so that the estimated state equals the true state. An example is presented which illustrates the complete procedure for designing the estimator.
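
    The paper's specific coefficient-matrix design is not reproduced here; the sketch below only illustrates the general idea of prefiltering fast-sampled output through a moving-average process, using equal weights rather than the matrices the paper derives.

        % Generic MA prefilter on fast-sampled measurements (illustrative
        % equal-weight coefficients, not the paper's designed matrices).
        fs = 1000;  t = (0:1/fs:1)';
        y  = sin(2*pi*3*t) + 0.2*randn(size(t));  % noisy high-rate output
        M  = 20;                                  % MA window length
        B  = ones(1, M) / M;                      % MA coefficients
        yhat = filter(B, 1, y);                   % prefiltered output
        plot(t, y, ':', t, yhat, '-'); legend('measured', 'prefiltered');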

  15. Disentangling mechanisms that mediate the balance between stochastic and deterministic processes in microbial succession.

    PubMed

    Dini-Andreote, Francisco; Stegen, James C; van Elsas, Jan Dirk; Salles, Joana Falcão

    2015-03-17

    Ecological succession and the balance between stochastic and deterministic processes are two major themes within microbial ecology, but these conceptual domains have mostly developed independent of each other. Here we provide a framework that integrates shifts in community assembly processes with microbial primary succession to better understand mechanisms governing the stochastic/deterministic balance. Synthesizing previous work, we devised a conceptual model that links ecosystem development to alternative hypotheses related to shifts in ecological assembly processes. Conceptual model hypotheses were tested by coupling spatiotemporal data on soil bacterial communities with environmental conditions in a salt marsh chronosequence spanning 105 years of succession. Analyses within successional stages showed community composition to be initially governed by stochasticity, but as succession proceeded, there was a progressive increase in deterministic selection correlated with increasing sodium concentration. Analyses of community turnover among successional stages--which provide a larger spatiotemporal scale relative to within stage analyses--revealed that changes in the concentration of soil organic matter were the main predictor of the type and relative influence of determinism. Taken together, these results suggest scale-dependency in the mechanisms underlying selection. To better understand mechanisms governing these patterns, we developed an ecological simulation model that revealed how changes in selective environments cause shifts in the stochastic/deterministic balance. Finally, we propose an extended--and experimentally testable--conceptual model integrating ecological assembly processes with primary and secondary succession. This framework provides a priori hypotheses for future experiments, thereby facilitating a systematic approach to understand assembly and succession in microbial communities across ecosystems.

  16. Using Reputation Systems and Non-Deterministic Routing to Secure Wireless Sensor Networks

    PubMed Central

    Moya, José M.; Vallejo, Juan Carlos; Fraga, David; Araujo, Álvaro; Villanueva, Daniel; de Goyeneche, Juan-Mariano

    2009-01-01

    Security in wireless sensor networks is difficult to achieve because of the resource limitations of the sensor nodes. We propose a trust-based decision framework for wireless sensor networks coupled with a non-deterministic routing protocol. Both provide a mechanism to effectively detect and confine common attacks, and, unlike previous approaches, allow bad reputation feedback to the network. This approach has been extensively simulated, obtaining good results, even for unrealistically complex attack scenarios. PMID:22412345

  17. Mesh generation and energy group condensation studies for the jaguar deterministic transport code

    SciTech Connect

    Kennedy, R. A.; Watson, A. M.; Iwueke, C. I.; Edwards, E. J.

    2012-07-01

    The deterministic transport code Jaguar is introduced, and the modeling process for Jaguar is demonstrated using a two-dimensional assembly model of the Hoogenboom-Martin Performance Benchmark Problem. This single-assembly model is being used to test and analyze optimal modeling methodologies and techniques for Jaguar. This paper focuses on spatial mesh generation and energy group condensation techniques. In this summary, the models and processes are defined, and thermal flux solutions are compared with those of the Monte Carlo code MC21. (authors)
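
    Jaguar's internals are not shown here. As a reminder of what energy group condensation does, the standard flux-weighted collapse sigma_G = sum_g(sigma_g*phi_g) / sum_g(phi_g), taken over the fine groups g in each coarse group G, is sketched below with made-up data.

        % Standard flux-weighted group condensation (illustrative data).
        sigma  = [1.2 1.0 0.8 0.6 0.5 0.4 0.3 0.2]; % fine-group cross sections
        phi    = [0.5 0.9 1.3 1.6 1.4 1.0 0.6 0.3]; % fine-group flux
        groups = {1:4, 5:8};                        % coarse-group membership
        sigmaG = zeros(1, numel(groups));
        for G = 1:numel(groups)
            g = groups{G};
            sigmaG(G) = sum(sigma(g) .* phi(g)) / sum(phi(g));
        end
        disp(sigmaG)                                % condensed cross sections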

  18. On the application of deterministic optimization methods to stochastic control problems.

    NASA Technical Reports Server (NTRS)

    Kramer, L. C.; Athans, M.

    1972-01-01

    A technique is presented by which one can apply the Minimum Principle of Pontryagin to stochastic optimal control problems formulated around linear systems with Gaussian noise and general cost criteria. Using this technique, the stochastic nature of the problem is suppressed except for two expectation operations, so the optimization is essentially deterministic. The technique is applied to systems with quadratic and non-quadratic costs to illustrate its use.

  19. Invited Review: A review of deterministic effects in cyclic variability of internal combustion engines

    DOE PAGES

    Finney, Charles E.; Kaul, Brian C.; Daw, C. Stuart; ...

    2015-02-18

    Here we review developments in the understanding of cycle-to-cycle variability in internal combustion engines, with a focus on spark-ignited and premixed combustion conditions. Much of the research on cyclic variability has focused on stochastic aspects, that is, features that can be modeled as inherently random with no short-term predictability. In some cases, models of this type appear to work very well at describing experimental observations, but the lack of predictability limits control options. Also, even when the statistical properties of the stochastic variations are known, it can be very difficult to discern their underlying physical causes and thus mitigate them. Some recent studies have demonstrated that under some conditions, cyclic combustion variations can have a relatively high degree of low-dimensional deterministic structure, which implies some degree of predictability and potential for real-time control. These deterministic effects are typically more pronounced near critical stability limits (e.g., near tipping points associated with ignition or flame propagation), such as during highly dilute fueling or near the onset of homogeneous charge compression ignition. We review recent progress in experimental and analytical characterization of cyclic variability where low-dimensional, deterministic effects have been observed. We describe some theories about the sources of these dynamical features and discuss prospects for interactive control and improved engine designs. In conclusion, taken as a whole, the research summarized here implies that the deterministic component of cyclic variability will become a pivotal issue (and potential opportunity) as engine manufacturers strive to meet aggressive emissions and fuel economy regulations in the coming decades.

  20. Invited Review: A review of deterministic effects in cyclic variability of internal combustion engines

    SciTech Connect

    Finney, Charles E.; Kaul, Brian C.; Daw, C. Stuart; Wagner, Robert M.; Edwards, K. Dean; Green, Johney B.

    2015-02-18

    Here we review developments in the understanding of cycle-to-cycle variability in internal combustion engines, with a focus on spark-ignited and premixed combustion conditions. Much of the research on cyclic variability has focused on stochastic aspects, that is, features that can be modeled as inherently random with no short-term predictability. In some cases, models of this type appear to work very well at describing experimental observations, but the lack of predictability limits control options. Also, even when the statistical properties of the stochastic variations are known, it can be very difficult to discern their underlying physical causes and thus mitigate them. Some recent studies have demonstrated that under some conditions, cyclic combustion variations can have a relatively high degree of low-dimensional deterministic structure, which implies some degree of predictability and potential for real-time control. These deterministic effects are typically more pronounced near critical stability limits (e.g., near tipping points associated with ignition or flame propagation), such as during highly dilute fueling or near the onset of homogeneous charge compression ignition. We review recent progress in experimental and analytical characterization of cyclic variability where low-dimensional, deterministic effects have been observed. We describe some theories about the sources of these dynamical features and discuss prospects for interactive control and improved engine designs. In conclusion, taken as a whole, the research summarized here implies that the deterministic component of cyclic variability will become a pivotal issue (and potential opportunity) as engine manufacturers strive to meet aggressive emissions and fuel economy regulations in the coming decades.
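
    A common diagnostic in this literature for low-dimensional deterministic structure is the return map of consecutive-cycle metrics (e.g., heat release): deterministic dynamics trace out a pattern, while pure noise forms a shapeless cloud. The sketch below illustrates the plot with synthetic data (a logistic map standing in for dilute-limit combustion dynamics), not the authors' engine data.

        % Return-map diagnostic on synthetic cycle-resolved data.
        n = 2000;  q = zeros(n, 1);  q(1) = 0.5;
        for k = 2:n
            q(k) = 3.9 * q(k-1) * (1 - q(k-1));  % surrogate cycle dynamics
        end
        q = q + 0.01*randn(n, 1);                % measurement noise
        plot(q(1:end-1), q(2:end), '.');
        xlabel('q_k'); ylabel('q_{k+1}'); title('Cycle-to-cycle return map');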