Optimized random phase only holograms.
Zea, Alejandro Velez; Barrera Ramirez, John Fredy; Torroba, Roberto
2018-02-15
We propose a simple and efficient technique capable of generating Fourier phase only holograms with a reconstruction quality similar to the results obtained with the Gerchberg-Saxton (G-S) algorithm. Our proposal is to use the traditional G-S algorithm to optimize a random phase pattern for the resolution, pixel size, and target size of the general optical system without any specific amplitude data. This produces an optimized random phase (ORAP), which is used for fast generation of phase only holograms of arbitrary amplitude targets. This ORAP needs to be generated only once for a given optical system, avoiding the need for costly iterative algorithms for each new target. We show numerical and experimental results confirming the validity of the proposal.
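The core loop the abstract builds on, iterating between the image plane and the hologram plane while enforcing a phase-only constraint, can be sketched in NumPy. This is a generic Gerchberg-Saxton implementation, not the authors' code; the function name `gerchberg_saxton` and the use of a featureless all-ones target to mimic the ORAP pre-optimization (optimizing for the system geometry without specific amplitude data) are illustrative assumptions.

```python
import numpy as np

def gerchberg_saxton(target_amp, n_iter=50, seed=0):
    """Generic G-S loop: alternate between enforcing the target amplitude
    in the image plane and a phase-only constraint in the hologram plane."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, target_amp.shape)  # random start
    for _ in range(n_iter):
        image_field = target_amp * np.exp(1j * phase)   # enforce target amplitude
        holo_field = np.fft.ifft2(image_field)
        holo_phase = np.angle(holo_field)               # phase-only hologram
        phase = np.angle(np.fft.fft2(np.exp(1j * holo_phase)))
    return holo_phase

# Pre-optimizing against a flat (all-ones) "target" mimics the ORAP idea:
# the resulting phase is matched to the system geometry, not to any image.
orap = gerchberg_saxton(np.ones((64, 64)))
```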
Deterministic matrices matching the compressed sensing phase transitions of Gaussian random matrices
Monajemi, Hatef; Jafarpour, Sina; Gavish, Matan; Donoho, David L.; Ambikasaran, Sivaram; Bacallado, Sergio; Bharadia, Dinesh; Chen, Yuxin; Choi, Young; Chowdhury, Mainak; Chowdhury, Soham; Damle, Anil; Fithian, Will; Goetz, Georges; Grosenick, Logan; Gross, Sam; Hills, Gage; Hornstein, Michael; Lakkam, Milinda; Lee, Jason; Li, Jian; Liu, Linxi; Sing-Long, Carlos; Marx, Mike; Mittal, Akshay; Monajemi, Hatef; No, Albert; Omrani, Reza; Pekelis, Leonid; Qin, Junjie; Raines, Kevin; Ryu, Ernest; Saxe, Andrew; Shi, Dai; Siilats, Keith; Strauss, David; Tang, Gary; Wang, Chaojun; Zhou, Zoey; Zhu, Zhen
2013-01-01
In compressed sensing, one takes n < N samples of an N-dimensional vector x0 using an n × N matrix A, obtaining undersampled measurements y = Ax0. For random matrices with independent standard Gaussian entries, it is known that, when x0 is k-sparse, there is a precisely determined phase transition: for a certain region in the (δ, ρ)-phase diagram, with δ = n/N and ρ = k/n, convex optimization typically finds the sparsest solution, whereas outside that region, it typically fails. It has been shown empirically that the same property—with the same phase transition location—holds for a wide range of non-Gaussian random matrix ensembles. We report extensive experiments showing that the Gaussian phase transition also describes numerous deterministic matrices, including Spikes and Sines, Spikes and Noiselets, Paley Frames, Delsarte-Goethals Frames, Chirp Sensing Matrices, and Grassmannian Frames. Namely, for each of these deterministic matrices in turn, for a typical k-sparse object, we observe that convex optimization is successful over a region of the phase diagram that coincides with the region known for Gaussian random matrices. Our experiments considered coefficients constrained to a set X for four different sets X ∈ {[0,1], R+, R, C}, and the results establish our finding for each of the four associated phase transitions. PMID:23277588
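A small-scale instance of the experiment described above can be sketched with an l1 solver built from a linear program. The helper name `basis_pursuit` and the problem sizes are illustrative assumptions; the chosen (δ, ρ) = (0.5, 0.1) point lies well inside the Gaussian success region, so recovery is expected.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    # min ||x||_1 s.t. Ax = y, written as an LP in x = u - v with u, v >= 0
    n, N = A.shape
    c = np.ones(2 * N)
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=[(0, None)] * (2 * N))
    return res.x[:N] - res.x[N:]

rng = np.random.default_rng(1)
n, N, k = 30, 60, 3                      # delta = n/N = 0.5, rho = k/n = 0.1
A = rng.standard_normal((n, N)) / np.sqrt(n)
x0 = np.zeros(N)
x0[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)
x_hat = basis_pursuit(A, A @ x0)
# (0.5, 0.1) is well below the Gaussian phase transition, so l1
# minimization is expected to recover x0 up to solver tolerance
```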
Random search optimization based on genetic algorithm and discriminant function
NASA Technical Reports Server (NTRS)
Kiciman, M. O.; Akgul, M.; Erarslanoglu, G.
1990-01-01
The general problem of optimization with arbitrary merit and constraint functions, which could be convex, concave, monotonic, or non-monotonic, is treated using stochastic methods. To improve the efficiency of the random search methods, a genetic algorithm for the search phase and a discriminant function for the constraint-control phase were utilized. The validity of the technique is demonstrated by comparing the results to published test problem results. Numerical experimentation indicated that for cases where a quick near-optimum solution is desired, a general, user-friendly optimization code can be developed without serious penalties in either total computer time or accuracy.
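A toy version of a penalized genetic search of this kind can be sketched as follows. The specific operators (tournament selection, blend crossover, Gaussian mutation) and the quadratic penalty standing in for the paper's discriminant-function constraint control are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def genetic_search(merit, constraint, bounds, pop=40, gens=80, seed=0):
    """Minimize merit(x) subject to constraint(x) <= 0 with a simple GA."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(pop, len(lo)))

    def penalized(x):
        # quadratic penalty stands in for the constraint-control phase
        return merit(x) + 1e3 * max(0.0, constraint(x)) ** 2

    for _ in range(gens):
        scores = np.array([penalized(x) for x in X])
        # tournament selection: pairwise duels between random individuals
        i, j = rng.integers(0, pop, size=(2, pop))
        parents = X[np.where(scores[i] < scores[j], i, j)]
        # blend crossover with a random partner, then Gaussian mutation
        alpha = rng.uniform(0.0, 1.0, size=(pop, 1))
        X = alpha * parents + (1.0 - alpha) * parents[rng.permutation(pop)]
        X = np.clip(X + rng.normal(0.0, 0.05, X.shape), lo, hi)
        # elitism: re-insert the best parent unchanged
        X[0] = parents[np.argmin([penalized(x) for x in parents])]
    return min(X, key=penalized)

# toy problem: minimize (x-1)^2 + (y-2)^2 subject to x + y <= 2
best = genetic_search(lambda v: (v[0] - 1) ** 2 + (v[1] - 2) ** 2,
                      lambda v: v[0] + v[1] - 2.0,
                      bounds=[(-5, 5), (-5, 5)])
```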
Takahashi, Fumihiro; Morita, Satoshi
2018-02-08
Phase II clinical trials are conducted to determine the optimal dose of the study drug for use in Phase III clinical trials while also balancing efficacy and safety. In conducting these trials, it may be important to consider subpopulations of patients grouped by background factors such as drug metabolism and kidney and liver function. Determining the optimal dose, as well as maximizing the effectiveness of the study drug by analyzing patient subpopulations, requires a complex decision-making process. In extreme cases, drug development has to be terminated due to inadequate efficacy or severe toxicity. Such a decision may be based on a particular subpopulation. We propose a Bayesian utility approach (BUART) to randomized Phase II clinical trials which uses a first-order bivariate normal dynamic linear model for efficacy and safety in order to determine the optimal dose and study population in a subsequent Phase III clinical trial. We carried out a simulation study under a wide range of clinical scenarios to evaluate the performance of the proposed method in comparison with a conventional method separately analyzing efficacy and safety in each patient population. The proposed method showed more favorable operating characteristics in determining the optimal population and dose.
Jiang, Wei; Mahnken, Jonathan D; He, Jianghua; Mayo, Matthew S
2016-11-01
For two-arm randomized phase II clinical trials, previous literature proposed an optimal design that minimizes the total sample size subject to multiple constraints on the standard errors of the estimated event rates and their difference. The original design is limited to trials with dichotomous endpoints. This paper extends the original approach to be applicable to phase II clinical trials with endpoints from the exponential dispersion family of distributions. The proposed optimal design minimizes the total sample size needed to provide estimates of the population means of both arms and their difference with pre-specified precision. Its applications to data from specific distribution families are discussed under multiple design considerations. Copyright © 2016 John Wiley & Sons, Ltd.
Huss, Michael; Ginsberg, Ylva; Arngrim, Torben; Philipsen, Alexandra; Carter, Katherine; Chen, Chien-Wei; Gandhi, Preetam; Kumar, Vinod
2014-09-01
In the management of attention-deficit hyperactivity disorder (ADHD) in adults, it is important to recognize that individual patients respond to a wide range of methylphenidate doses. Studies with methylphenidate modified-release long-acting (MPH-LA) in children have reported the need for treatment optimization for improved outcomes. We report the results from a post hoc analysis of a 5-week dose optimization phase from a large randomized, placebo-controlled, multicenter 40-week study (9-week double-blind dose confirmation phase, 5-week open-label dose optimization phase, and 26-week double-blind maintenance-of-effect phase). Patients entering the open-label dose optimization phase initiated treatment with MPH-LA 20 mg/day, titrated up or down to their optimal dose (at which control of symptoms and side effects were balanced) of 40, 60, or 80 mg/day in increments of 20 mg/week by week 12 or 13. Safety was assessed by monitoring adverse events (AEs) and serious AEs. Efficacy was assessed by the Diagnostic and Statistical Manual of Mental Disorders, fourth edition, Attention-Deficit Hyperactivity Disorder Rating Scale (DSM-IV ADHD RS) and Sheehan Disability Scale (SDS) total scores. At the end of the dose optimization phase, similar numbers of patients were treated optimally with each of the 40, 60, and 80 mg/day MPH-LA doses (152, 177, and 160 patients, respectively). Mean improvements from baseline in DSM-IV ADHD RS and SDS total scores were 23.5 ± 9.90 and 9.7 ± 7.36, respectively. Dose optimization with MPH-LA (40, 60, or 80 mg/day) improved treatment outcomes and was well tolerated in adult ADHD patients.
NASA Astrophysics Data System (ADS)
Tang, Li-Chuan; Hu, Guang W.; Russell, Kendra L.; Chang, Chen S.; Chang, Chi Ching
2000-10-01
We propose a new holographic memory scheme based on random phase-encoded multiplexing in a photorefractive LiNbO3:Fe crystal. Experimental results show that rotating a diffuser placed as a random phase modulator in the path of the reference beam provides a simple yet effective method of increasing the holographic storage capabilities of the crystal. Combining this rotational multiplexing with angular multiplexing offers further advantages. Storage capabilities can be optimized by using a post-image random phase plate in the path of the object beam. The technique is applied to a triple phase-encoded optical security system that takes advantage of the high angular selectivity of the angular-rotational multiplexing components.
Focusing light through random photonic layers by four-element division algorithm
NASA Astrophysics Data System (ADS)
Fang, Longjie; Zhang, Xicheng; Zuo, Haoyi; Pang, Lin
2018-02-01
The propagation of waves in turbid media is a fundamental problem of optics with vast applications. Optical phase-optimization approaches for focusing light through turbid media using phase-control algorithms have been widely studied in recent years, driven by the rapid development of spatial light modulators. Existing approaches include element-based algorithms (the stepwise sequential algorithm and the continuous sequential algorithm) and whole-element optimization approaches (the partitioning algorithm, the transmission matrix approach, and the genetic algorithm). The advantage of element-based approaches is that the phase contribution of each element is unambiguous; however, because each element's intensity contribution to the focal point is small, especially when the number of elements is large, determining the optimal phase for a single element is difficult. In other words, the signal-to-noise ratio of the measurement is low, and the optimization can become trapped in local maxima. Whole-element optimization approaches employ all elements in each measurement, which improves the signal-to-noise ratio; however, because they introduce more randomness into the process, they take longer to converge than element-based approaches. Drawing on the advantages of both families of approaches, we propose the four-element division algorithm (FEDA). Comparisons with the existing approaches show that FEDA reaches the optimum in only one third of the measurement time, which makes FEDA promising for practical applications such as deep-tissue imaging.
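The element-based family that FEDA builds on can be illustrated with a noiseless single-pixel focusing simulation. Here `t` plays the role of one row of an unknown transmission matrix, and the stepwise sequential loop is a generic sketch, not the authors' FEDA code.

```python
import numpy as np

rng = np.random.default_rng(2)
M = 64                                    # number of SLM segments
# one row of an (unknown) complex transmission matrix, drawn at random
t = rng.standard_normal(M) + 1j * rng.standard_normal(M)

def focus_intensity(phases):
    return np.abs(np.sum(t * np.exp(1j * phases))) ** 2

# stepwise sequential algorithm: scan a few test phases per element and
# keep whichever maximizes the (simulated) focal intensity
phases = np.zeros(M)
test_phases = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
for i in range(M):
    vals = []
    for p in test_phases:
        phases[i] = p
        vals.append(focus_intensity(phases))
    phases[i] = test_phases[int(np.argmax(vals))]

before = focus_intensity(np.zeros(M))     # unoptimized speckle intensity
after = focus_intensity(phases)           # strongly enhanced focus
```

In this noiseless setting the per-element signal is clean; the abstract's point is that in a real measurement each element's small contribution is buried in noise, which is what FEDA's four-element grouping is meant to mitigate.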
Gilbert, Peter B; Yu, Xuesong; Rotnitzky, Andrea
2014-03-15
To address the objective in a clinical trial to estimate the mean or mean difference of an expensive endpoint Y, one approach employs a two-phase sampling design, wherein inexpensive auxiliary variables W predictive of Y are measured in everyone, Y is measured in a random sample, and the semiparametric efficient estimator is applied. This approach is made efficient by specifying the phase two selection probabilities as optimal functions of the auxiliary variables and measurement costs. While this approach is familiar to survey samplers, it apparently has seldom been used in clinical trials, and several novel results practicable for clinical trials are developed. We perform simulations to identify settings where the optimal approach significantly improves efficiency compared to approaches in current practice. We provide proofs and R code. The optimality results are developed to design an HIV vaccine trial, with objective to compare the mean 'importance-weighted' breadth (Y) of the T-cell response between randomized vaccine groups. The trial collects an auxiliary response (W) highly predictive of Y and measures Y in the optimal subset. We show that the optimal design-estimation approach can confer anywhere between absent and large efficiency gain (up to 24% in the examples) compared to the approach with the same efficient estimator but simple random sampling, where greater variability in the cost-standardized conditional variance of Y given W yields greater efficiency gains. Accurate estimation of E[Y | W] is important for realizing the efficiency gain, which is aided by an ample phase two sample and by using a robust fitting method. Copyright © 2013 John Wiley & Sons, Ltd.
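The optimal-allocation idea, selection probabilities proportional to the cost-standardized conditional standard deviation of Y given W, can be sketched for discrete strata. The function name, the equal-size-strata assumption behind the budget rescaling, and the example numbers are all illustrative, not the paper's R code.

```python
import numpy as np

def optimal_phase2_probs(cond_var, cost, budget_frac):
    # Neyman-type allocation: pi(w) proportional to sqrt(Var(Y|W=w)/cost(w)),
    # rescaled so the expected sampled fraction equals budget_frac
    # (assuming equal-size W-strata), with probabilities kept in (0, 1]
    p = np.sqrt(np.asarray(cond_var, float) / np.asarray(cost, float))
    p *= budget_frac * len(p) / p.sum()
    return np.clip(p, 1e-6, 1.0)

cond_var = np.array([1.0, 4.0, 9.0])   # Var(Y | W = w) in three W-strata
cost = np.array([1.0, 1.0, 1.0])       # equal measurement costs
p = optimal_phase2_probs(cond_var, cost, budget_frac=0.3)
# with equal costs, p follows the conditional SDs 1:2:3
```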
Gilbert, Peter B.; Yu, Xuesong; Rotnitzky, Andrea
2014-01-01
To address the objective in a clinical trial to estimate the mean or mean difference of an expensive endpoint Y, one approach employs a two-phase sampling design, wherein inexpensive auxiliary variables W predictive of Y are measured in everyone, Y is measured in a random sample, and the semi-parametric efficient estimator is applied. This approach is made efficient by specifying the phase-two selection probabilities as optimal functions of the auxiliary variables and measurement costs. While this approach is familiar to survey samplers, it apparently has seldom been used in clinical trials, and several novel results practicable for clinical trials are developed. Simulations are performed to identify settings where the optimal approach significantly improves efficiency compared to approaches in current practice. Proofs and R code are provided. The optimality results are developed to design an HIV vaccine trial, with objective to compare the mean “importance-weighted” breadth (Y) of the T cell response between randomized vaccine groups. The trial collects an auxiliary response (W) highly predictive of Y, and measures Y in the optimal subset. We show that the optimal design-estimation approach can confer anywhere between absent and large efficiency gain (up to 24% in the examples) compared to the approach with the same efficient estimator but simple random sampling, where greater variability in the cost-standardized conditional variance of Y given W yields greater efficiency gains. Accurate estimation of E[Y∣W] is important for realizing the efficiency gain, which is aided by an ample phase-two sample and by using a robust fitting method. PMID:24123289
Continuous-variable phase estimation with unitary and random linear disturbance
NASA Astrophysics Data System (ADS)
Delgado de Souza, Douglas; Genoni, Marco G.; Kim, M. S.
2014-10-01
We address the problem of continuous-variable quantum phase estimation in the presence of linear disturbance at the Hamiltonian level by means of Gaussian probe states. In particular we discuss both unitary and random disturbance by considering the parameter which characterizes the unwanted linear term present in the Hamiltonian as fixed (unitary disturbance) or random with a given probability distribution (random disturbance). We derive the optimal input Gaussian states at fixed energy, maximizing the quantum Fisher information over the squeezing angle and the squeezing energy fraction, and we discuss the scaling of the quantum Fisher information in terms of the output number of photons, n_out. We observe that, in the case of unitary disturbance, the optimal state is a squeezed vacuum state and the quadratic scaling is conserved. As regards the random disturbance, we observe that the optimal squeezing fraction may not be equal to one and, for any nonzero value of the noise parameter, the quantum Fisher information scales linearly with the average number of photons. Finally, we discuss the performance of homodyne measurement by comparing the achievable precision with the ultimate limit imposed by the quantum Cramér-Rao bound.
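The quadratic scaling claimed for the squeezed vacuum probe can be checked numerically from its photon-number distribution: for a phase generated by the number operator, the quantum Fisher information of a pure probe is F_Q = 4 Var(n), which for squeezed vacuum equals 8 nbar (nbar + 1). The code below is a verification sketch using these standard textbook formulas, not the authors' calculation.

```python
import numpy as np
from math import lgamma, log, tanh, cosh, sinh

def squeezed_vacuum_number_stats(r, cutoff=400):
    # photon statistics of squeezed vacuum:
    # P(n = 2m) = tanh(r)^(2m) (2m)! / (4^m (m!)^2 cosh(r)); odd n have P = 0
    logp = np.array([2 * m * log(tanh(r)) + lgamma(2 * m + 1)
                     - 2 * lgamma(m + 1) - m * log(4.0) - log(cosh(r))
                     for m in range(cutoff)])
    p = np.exp(logp)
    n = 2 * np.arange(cutoff)
    mean = float(np.sum(p * n))
    var = float(np.sum(p * n ** 2) - mean ** 2)
    return mean, var

r = 1.0
mean, var = squeezed_vacuum_number_stats(r)
nbar = sinh(r) ** 2          # mean photon number of squeezed vacuum
# phase shifts generated by the number operator give F_Q = 4 Var(n),
# and Var(n) = 2 nbar (nbar + 1): quadratic (Heisenberg-like) scaling
F_Q = 4 * var
```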
Chen, Kai; Lynen, Frédéric; De Beer, Maarten; Hitzel, Laure; Ferguson, Paul; Hanna-Brown, Melissa; Sandra, Pat
2010-11-12
Stationary phase optimized selectivity liquid chromatography (SOSLC) is a promising technique to optimize the selectivity of a given separation by using a combination of different stationary phases. Previous work has shown that SOSLC offers excellent possibilities for method development, especially after the recent modification towards linear gradient SOSLC. The present work is aimed at developing and extending the SOSLC approach towards selectivity optimization and method development for green chromatography. Contrary to current LC practices, a green mobile phase (water/ethanol/formic acid) is hereby preselected and the composition of the stationary phase is optimized under a given gradient profile to obtain baseline resolution of all target solutes in the shortest possible analysis time. With the algorithm adapted to the high viscosity property of ethanol, the principle is illustrated with a fast, full baseline resolution for a randomly selected mixture composed of sulphonamides, xanthine alkaloids and steroids. Copyright © 2010 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Katkovnik, Vladimir; Shevkunov, Igor; Petrov, Nikolay V.; Egiazarian, Karen
2017-06-01
In-line lensless holography is considered with a random phase modulation at the object plane. The forward wavefront propagation is modelled using the Fourier transform with the angular spectrum transfer function. The multiple intensities (holograms) recorded by the sensor are random due to the random phase modulation and noisy with a Poissonian noise distribution. It is shown by computational experiments that high-accuracy reconstructions can be achieved with resolution going up to two thirds of the wavelength. With respect to the sensor pixel size, this is super-resolution by a factor of 32. The algorithm designed for optimal super-resolution phase/amplitude reconstruction from Poissonian data is based on the general methodology developed for phase retrieval with pixel-wise resolution in V. Katkovnik, "Phase retrieval from noisy data based on sparse approximation of object phase and amplitude", http://www.cs.tut.fi/~lasip/DDT/index3.html.
Measurement Matrix Design for Phase Retrieval Based on Mutual Information
NASA Astrophysics Data System (ADS)
Shlezinger, Nir; Dabora, Ron; Eldar, Yonina C.
2018-01-01
In phase retrieval problems, a signal of interest (SOI) is reconstructed based on the magnitude of a linear transformation of the SOI observed with additive noise. The linear transform is typically referred to as a measurement matrix. Many works on phase retrieval assume that the measurement matrix is a random Gaussian matrix, which, in the noiseless scenario with sufficiently many measurements, guarantees invertibility of the transformation between the SOI and the observations, up to an inherent phase ambiguity. However, in many practical applications, the measurement matrix corresponds to an underlying physical setup, and is therefore deterministic, possibly with structural constraints. In this work we study the design of deterministic measurement matrices, based on maximizing the mutual information between the SOI and the observations. We characterize necessary conditions for the optimality of a measurement matrix, and analytically obtain the optimal matrix in the low signal-to-noise ratio regime. Practical methods for designing general measurement matrices and masked Fourier measurements are proposed. Simulation tests demonstrate the performance gain achieved by the proposed techniques compared to random Gaussian measurements for various phase recovery algorithms.
Phase unwrapping using region-based markov random field model.
Dong, Ying; Ji, Jim
2010-01-01
Phase unwrapping is a classical problem in Magnetic Resonance Imaging (MRI), Interferometric Synthetic Aperture Radar and Sonar (InSAR/InSAS), fringe pattern analysis, and spectroscopy. Although many methods have been proposed to address this problem, robust and effective phase unwrapping remains a challenge. This paper presents a novel phase unwrapping method using a region-based Markov Random Field (MRF) model. Specifically, the phase image is segmented into regions within which the phase is not wrapped. Then, the phase image is unwrapped between different regions using an improved Highest Confidence First (HCF) algorithm to optimize the MRF model. The proposed method has desirable theoretical properties as well as an efficient implementation. Simulations and experimental results on MRI images show that the proposed method provides phase unwrapping comparable or superior to the Phase Unwrapping Max-flow/min-cut (PUMA) and ZπM methods.
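For contrast with the region-based MRF method, the classical one-dimensional baseline (Itoh's algorithm) fits in a few lines. It is exact only when the true phase changes by less than π between samples, which is precisely the local-smoothness assumption that noise breaks and that region-based methods are designed to relax.

```python
import numpy as np

def unwrap_1d(wrapped):
    # Itoh's method: re-wrap successive differences into (-pi, pi], then
    # integrate; exact whenever the true phase changes by < pi per sample
    d = np.diff(wrapped)
    d = (d + np.pi) % (2.0 * np.pi) - np.pi
    return np.concatenate(([wrapped[0]], wrapped[0] + np.cumsum(d)))

true_phase = np.linspace(0.0, 8.0 * np.pi, 200)   # smooth ramp, 4 wraps
wrapped = np.angle(np.exp(1j * true_phase))       # wrapped to (-pi, pi]
recovered = unwrap_1d(wrapped)
```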
OPTIMIZING THROUGH CO-EVOLUTIONARY AVALANCHES
DOE Office of Scientific and Technical Information (OSTI.GOV)
S. BOETTCHER; A. PERCUS
2000-08-01
We explore a new general-purpose heuristic for finding high-quality solutions to hard optimization problems. The method, called extremal optimization, is inspired by ''self-organized criticality,'' a concept introduced to describe emergent complexity in many physical systems. In contrast to genetic algorithms, which operate on an entire ''gene pool'' of possible solutions, extremal optimization successively replaces extremely undesirable elements of a sub-optimal solution with new, random ones. Large fluctuations, called ''avalanches,'' ensue that efficiently explore many local optima. Drawing upon models used to simulate far-from-equilibrium dynamics, extremal optimization complements approximation methods inspired by equilibrium statistical physics, such as simulated annealing. With only one adjustable parameter, its performance has proved competitive with more elaborate methods, especially near phase transitions. Such phase transitions are found in the parameter space of most optimization problems, and have recently been conjectured to be the origin of some of the hardest instances in computational complexity. We demonstrate how extremal optimization can be implemented for a variety of combinatorial optimization problems. We believe that extremal optimization will be a useful tool in the investigation of phase transitions in combinatorial optimization problems, hence valuable in elucidating the origin of computational complexity.
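The basic, parameter-free variant of extremal optimization can be sketched for an Ising spin glass: unconditionally flip the spin with the worst local fitness, with no acceptance test at all. This "flip the worst" rule is the simplest form; the one-parameter τ-EO version the abstract alludes to instead ranks elements by fitness and picks one from a power-law distribution. The implementation below is a generic sketch, not the authors' code.

```python
import numpy as np

def extremal_optimization(J, steps=2000, seed=0):
    """Basic EO for an Ising spin glass H = -1/2 sum_ij J_ij s_i s_j:
    at every step, flip the spin least satisfied with its local field."""
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    s = rng.choice([-1, 1], size=n)
    best_E = -0.5 * s @ J @ s
    best_s = s.copy()
    for _ in range(steps):
        fit = s * (J @ s)              # local fitness of each spin
        s[np.argmin(fit)] *= -1        # replace the extremally bad element
        E = -0.5 * s @ J @ s
        if E < best_E:                 # track the best configuration seen
            best_E, best_s = E, s.copy()
    return best_s, best_E

rng = np.random.default_rng(3)
n = 40
J = rng.standard_normal((n, n))
J = (J + J.T) / 2                      # symmetric random couplings
np.fill_diagonal(J, 0)
spins, E_best = extremal_optimization(J)
```

Because the worst element is always replaced, the dynamics never freezes; the resulting avalanches are what let the search escape local optima without a temperature schedule.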
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, Chun Chia; Zhao, Rong, E-mail: zhao-rong@sutd.edu.sg; Chong, Tow Chong
2014-10-13
Nitrogen-doped titanium-tungsten (N-TiW) was proposed as a tunable heater in Phase Change Random Access Memory (PCRAM). By tuning the material properties of N-TiW through doping, the heater can be tailored to optimize the access speed and programming current of PCRAM. Experiments reveal that the resistivity of N-TiW increases and its thermal conductivity decreases with increasing nitrogen-doping ratio, and N-TiW devices displayed programming-current reductions of ∼33% to ∼55%. However, there is a tradeoff between current and speed for heater-based PCRAM. Analysis of devices with different N-TiW heaters shows that the N-TiW doping level can be optimized to enable low RESET currents and fast access speeds.
Xing, Haifeng; Hou, Bo; Lin, Zhihui; Guo, Meifeng
2017-10-13
MEMS (micro-electro-mechanical system) gyroscopes have been widely applied in various fields, but their random drift has nonlinear and non-stationary characteristics. Modeling and compensating this random drift has attracted much attention because it can improve the precision of inertial devices. This paper proposes wavelet filtering to reduce the noise in the original MEMS gyroscope data, phase space reconstruction (PSR) to reconstruct the random drift data, and a least squares support vector machine (LSSVM), with parameters optimized by chaotic particle swarm optimization (CPSO), to model the reconstructed data. Comparing the proposed method against a back-propagation artificial neural network (BP-ANN) for modeling MEMS gyroscope random drift, the results showed that the proposed method has better prediction accuracy. After compensating three groups of MEMS gyroscope random drift data, the standard deviations of the experimental data dropped from 0.00354°/s, 0.00412°/s, and 0.00328°/s to 0.00065°/s, 0.00072°/s, and 0.00061°/s, respectively, demonstrating that the proposed method can reduce the influence of MEMS gyroscope random drift and verifying its effectiveness for modeling it.
Shah-Basak, Priyanka P.; Norise, Catherine; Garcia, Gabriella; Torres, Jose; Faseyitan, Olufunsho; Hamilton, Roy H.
2015-01-01
While evidence suggests that transcranial direct current stimulation (tDCS) may facilitate language recovery in chronic post-stroke aphasia, individual variability in patient response to different patterns of stimulation remains largely unexplored. We sought to characterize this variability among chronic aphasic individuals, and to explore whether repeated stimulation with an individualized optimal montage could lead to persistent reduction of aphasia severity. In a two-phase study, we first stimulated patients with four active montages (left hemispheric anode or cathode; right hemispheric anode or cathode) and one sham montage (Phase 1). We examined changes in picture naming ability to address (1) variability in response to different montages among our patients, and (2) whether individual patients responded optimally to at least one montage. During Phase 2, subjects who responded in Phase 1 were randomized to receive either real-tDCS or sham stimulation (10 days); patients who were randomized to receive sham stimulation first were then crossed over to receive real-tDCS (10 days). In both phases, 2 mA tDCS was administered for 20 min per real-tDCS session, and patients performed a picture naming task during stimulation. Patients' language ability was re-tested 2 weeks and 2 months following real and sham tDCS in Phase 2. In Phase 1, despite considerable individual variability, the greatest average improvement was observed after left-cathodal stimulation. Seven out of 12 subjects responded optimally to at least one montage, as demonstrated by transient improvement in picture naming. In Phase 2, aphasia severity improved at 2 weeks and 2 months following real-tDCS but not sham. Despite individual variability with respect to the optimal tDCS approach, certain montages result in consistent transient improvement in persons with chronic post-stroke aphasia. This preliminary study supports the notion that individualized tDCS treatment may enhance aphasia recovery in a persistent manner. PMID:25954178
Combining local search with co-evolution in a remarkably simple way
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boettcher, S.; Percus, A.
2000-05-01
The authors explore a new general-purpose heuristic for finding high-quality solutions to hard optimization problems. The method, called extremal optimization, is inspired by self-organized criticality, a concept introduced to describe emergent complexity in physical systems. In contrast to genetic algorithms, which operate on an entire gene pool of possible solutions, extremal optimization successively replaces extremely undesirable elements of a single sub-optimal solution with new, random ones. Large fluctuations, or avalanches, ensue that efficiently explore many local optima. Drawing upon models used to simulate far-from-equilibrium dynamics, extremal optimization complements heuristics inspired by equilibrium statistical physics, such as simulated annealing. With only one adjustable parameter, its performance has proved competitive with more elaborate methods, especially near phase transitions. Phase transitions are found in many combinatorial optimization problems, and have been conjectured to occur in the region of parameter space containing the hardest instances. We demonstrate how extremal optimization can be implemented for a variety of hard optimization problems. We believe that this will be a useful tool in the investigation of phase transitions in combinatorial optimization, thereby helping to elucidate the origin of computational complexity.
von Krempelhuber, Alfred; Vollmar, Jens; Pokorny, Rolf; Rapp, Petra; Wulff, Niels; Petzold, Barbara; Handley, Amanda; Mateo, Lyn; Siersbol, Henriette; Kollaritsch, Herwig; Chaplin, Paul
2009-01-01
IMVAMUNE® is a Modified Vaccinia Ankara-based virus that is being developed as a safer 3rd generation smallpox vaccine. In order to determine the optimal dose for further development, a double-blind, randomized Phase II trial was performed testing three different doses of IMVAMUNE® in 164 healthy volunteers. All three IMVAMUNE® doses displayed a favourable safety profile, with local reactions as the most frequent observation. The 1×10^8 TCID50 IMVAMUNE® dose induced a total antibody response in 94% of the subjects following the first vaccination and the highest peak seroconversion rates by ELISA (100%) and PRNT (71%). This IMVAMUNE® dose was considered to be optimal for the further clinical development of this highly attenuated poxvirus as a safer smallpox vaccine. PMID:19944151
Optimal back-to-front airplane boarding.
Bachmat, Eitan; Khachaturov, Vassilii; Kuperman, Ran
2013-06-01
The problem of finding an optimal back-to-front airplane boarding policy is explored, using a mathematical model that is related to the 1+1 polynuclear growth model with concave boundary conditions and to causal sets in gravity. We study all airplane configurations and boarding group sizes. Optimal boarding policies for various airplane configurations are presented. Detailed calculations are provided along with simulations that support the main conclusions of the theory. We show that the effectiveness of back-to-front policies undergoes a phase transition when passing from lightly congested airplanes to heavily congested airplanes. The phase transition also affects the nature of the optimal or near-optimal policies. Under what we consider to be realistic conditions, optimal back-to-front policies lead to a modest 8-12% improvement in boarding time over random (no policy) boarding, using two boarding groups. Having more than two groups is not effective.
Gao, Jingjing; Nangia, Narinder; Jia, Jia; Bolognese, James; Bhattacharyya, Jaydeep; Patel, Nitin
2017-06-01
In this paper, we propose an adaptive randomization design for Phase 2 dose-finding trials to optimize Net Present Value (NPV) for an experimental drug. We replace the traditional fixed sample size design (Patel, et al., 2012) by this new design to see if NPV from the original paper can be improved. Comparison of the proposed design to the previous design is made via simulations using a hypothetical example based on a Diabetic Neuropathic Pain Study. Copyright © 2017 Elsevier Inc. All rights reserved.
Quantum algorithm for energy matching in hard optimization problems
NASA Astrophysics Data System (ADS)
Baldwin, C. L.; Laumann, C. R.
2018-06-01
We consider the ability of local quantum dynamics to solve the "energy-matching" problem: given an instance of a classical optimization problem and a low-energy state, find another macroscopically distinct low-energy state. Energy matching is difficult in rugged optimization landscapes, as the given state provides little information about the distant topography. Here, we show that the introduction of quantum dynamics can provide a speedup over classical algorithms in a large class of hard optimization problems. Tunneling allows the system to explore the optimization landscape while approximately conserving the classical energy, even in the presence of large barriers. Specifically, we study energy matching in the random p-spin model of spin-glass theory. Using perturbation theory and exact diagonalization, we show that introducing a transverse field leads to three sharp dynamical phases, only one of which solves the matching problem: (1) a small-field "trapped" phase, in which tunneling is too weak for the system to escape the vicinity of the initial state; (2) a large-field "excited" phase, in which the field excites the system into high-energy states, effectively forgetting the initial energy; and (3) the intermediate "tunneling" phase, in which the system succeeds at energy matching. The rate at which distant states are found in the tunneling phase, although exponentially slow in system size, is exponentially faster than classical search algorithms.
Virtual pyramid wavefront sensor for phase unwrapping.
Akondi, Vyas; Vohnsen, Brian; Marcos, Susana
2016-10-10
Noise affects wavefront reconstruction from wrapped phase data. A novel method of phase unwrapping is proposed with the help of a virtual pyramid wavefront sensor. The method was tested on noisy wrapped phase images obtained experimentally with a digital phase-shifting point diffraction interferometer. The virtuality of the pyramid wavefront sensor allows easy tuning of the pyramid apex angle and modulation amplitude. It is shown that an optimal modulation amplitude obtained by monitoring the Strehl ratio helps in achieving better accuracy. Through simulation studies and iterative estimation, it is shown that the virtual pyramid wavefront sensor is robust to random noise.
A Model with Darwinian Dynamics on a Rugged Landscape
NASA Astrophysics Data System (ADS)
Brotto, Tommaso; Bunin, Guy; Kurchan, Jorge
2017-02-01
We discuss population dynamics with selection and random diffusion, keeping the total population constant, in a fitness landscape associated with Constraint Satisfaction, a paradigm for difficult optimization problems. We obtain a phase diagram in terms of the size of the population and the diffusion rate, with a glass phase inside which the dynamics keeps searching for better configurations, and outside which deleterious 'mutations' spoil the performance. The phase diagram is analogous to that of dense active matter in terms of temperature and drive.
Analysis of elliptically polarized maximally entangled states for bell inequality tests
NASA Astrophysics Data System (ADS)
Martin, A.; Smirr, J.-L.; Kaiser, F.; Diamanti, E.; Issautier, A.; Alibart, O.; Frey, R.; Zaquine, I.; Tanzilli, S.
2012-06-01
When elliptically polarized maximally entangled states are considered, i.e., states having a non-random phase factor between the two bipartite polarization components, the standard settings used for optimal violation of Bell inequalities are no longer suitable. One way to retrieve the maximal amount of violation is to compensate for this phase while keeping the standard Bell inequality analysis settings. We propose in this paper a general theoretical approach that allows one to determine and adjust the phase of elliptically polarized maximally entangled states in order to optimize the violation of Bell inequalities. The formalism is also applied to several suggested experimental phase compensation schemes. In order to emphasize the simplicity and relevance of our approach, we also describe an experimental implementation using a standard Soleil-Babinet phase compensator. This device is employed to correct the phase that appears in the maximally entangled state generated from a type-II nonlinear photon-pair source after the photons are created and distributed over fiber channels.
[Optimization of the pseudorandom input signals used for the forced oscillation technique].
Liu, Xiaoli; Zhang, Nan; Liang, Hong; Zhang, Zhengbo; Li, Deyu; Wang, Weidong
2017-10-01
The forced oscillation technique (FOT) is an active pulmonary function measurement technique that is applied to identify the mechanical properties of the respiratory system using external excitation signals. FOT commonly uses single-frequency sine, pseudorandom, and periodic impulse excitation signals. Aiming at preventing the time-domain amplitude overshoot that might exist in the acquisition of combined multi-sinusoidal pseudorandom signals, this paper studied the phase optimization of pseudorandom signals. We tried two methods, random phase combination and a time-frequency domain swapping algorithm, to solve this problem, and used the crest factor to estimate the effect of optimization. Furthermore, in order to make the pseudorandom signals meet the requirements of respiratory system identification in 4-40 Hz, we compensated the input signals' amplitudes in the low frequency band (4-18 Hz) according to the frequency-response curve of the oscillation unit. Results showed that the time-frequency domain swapping algorithm could effectively optimize the phase combination of pseudorandom signals. Moreover, when the amplitudes at low frequencies were compensated, the expected stimulus signals which met the performance requirements were obtained.
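The time-frequency domain swapping idea described above can be sketched in a few lines: alternate between clipping the waveform in the time domain and restoring the prescribed magnitude spectrum in the frequency domain, monitoring the crest factor. A minimal illustration, where the clipping level, band, and iteration count are assumptions rather than the authors' settings:

```python
import numpy as np

def crest_factor(x):
    """Peak amplitude divided by the RMS value of a real signal."""
    return np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2))

def optimize_phases(bins, n=4096, n_iter=200, seed=0):
    """Time-frequency domain swapping: alternately clip the waveform in the
    time domain and restore the prescribed magnitude spectrum (keeping the
    new phases) in the frequency domain."""
    rng = np.random.default_rng(seed)
    mag = np.zeros(n // 2 + 1)
    mag[bins] = 1.0                                    # flat spectrum on the chosen bins
    phase = rng.uniform(0.0, 2.0 * np.pi, n // 2 + 1)  # random starting phases
    x = np.fft.irfft(mag * np.exp(1j * phase), n)
    for _ in range(n_iter):
        limit = 0.9 * np.max(np.abs(x))                # clip to suppress overshoot
        phase = np.angle(np.fft.rfft(np.clip(x, -limit, limit)))
        x = np.fft.irfft(mag * np.exp(1j * phase), n)  # restore magnitudes
    return x

# energy in rfft bins 4..40 (a 4-40 Hz band if the bin spacing is 1 Hz)
bins = np.arange(4, 41)
x_init = optimize_phases(bins, n_iter=0)  # random-phase multisine
x_opt = optimize_phases(bins)             # optimized phases
```

After optimization the crest factor of `x_opt` should be noticeably lower than that of the random-phase start, which is the quantity the study uses to judge the phase combination.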
Bridging the gap between formal and experience-based knowledge for context-aware laparoscopy.
Katić, Darko; Schuck, Jürgen; Wekerle, Anna-Laura; Kenngott, Hannes; Müller-Stich, Beat Peter; Dillmann, Rüdiger; Speidel, Stefanie
2016-06-01
Computer assistance is increasingly common in surgery. However, the amount of information is bound to overload the processing abilities of surgeons. We propose methods to recognize the current phase of a surgery for context-aware information filtering. The purpose is to select the most suitable subset of information for surgical situations which require special assistance. We combine formal knowledge, represented by an ontology, and experience-based knowledge, represented by training samples, to recognize phases. For this purpose, we have developed two different methods. Firstly, we use formal knowledge about possible phase transitions to create a composition of random forests. Secondly, we propose a method based on cultural optimization to infer formal rules from experience to recognize phases. The proposed methods are compared with a purely formal knowledge-based approach using rules and a purely experience-based one using regular random forests. The comparative evaluation on laparoscopic pancreas resections and adrenalectomies employs a consistent set of quality criteria on clean and noisy input. The rule-based approaches proved best with noise-free data. The random forest-based ones were more robust in the presence of noise. Formal and experience-based knowledge can be successfully combined for robust phase recognition.
NASA Astrophysics Data System (ADS)
Jin, Ye; Yang, Yang; Zhang, Du; Peng, Degao; Yang, Weitao
2017-10-01
The optimized effective potential (OEP) that gives accurate Kohn-Sham (KS) orbitals and orbital energies can be obtained from a given reference electron density. These OEP-KS orbitals and orbital energies are used here for calculating electronic excited states with the particle-particle random phase approximation (pp-RPA). Our calculations allow the examination of pp-RPA excitation energies with the exact KS density functional theory (DFT). Various input densities are investigated. Specifically, the excitation energies using the OEP with the electron densities from the coupled-cluster singles and doubles method display the lowest mean absolute error from the reference data for the low-lying excited states. This study probes into the theoretical limit of the pp-RPA excitation energies with the exact KS-DFT orbitals and orbital energies. We believe that higher-order correlation contributions beyond the pp-RPA bare Coulomb kernel are needed in order to achieve even higher accuracy in excitation energy calculations.
Landry, Alicia; Madson, Michael; Thomson, Jessica; Zoellner, Jamie; Connell, Carol; Yadrick, Kathleen
2015-01-01
Little is known about the effective dose of motivational interviewing for maintaining intervention-induced health outcome improvements. The purpose of this study was to compare effects of two doses of motivational interviewing for maintaining blood pressure improvements in a community-engaged lifestyle intervention conducted with African-Americans. Participants were tracked through a 12-month maintenance phase following a 6-month intervention targeting physical activity and diet. For the maintenance phase, participants were randomized to receive a low (4) or high (10) dose of motivational interviewing delivered via telephone by trained research staff. Generalized linear models were used to test for group differences in blood pressure. Blood pressure significantly increased during the maintenance phase. No differences were apparent between randomized groups. Results suggest that 10 or fewer motivational interviewing calls over a 12-month period may be insufficient to maintain post-intervention improvements in blood pressure. Further research is needed to determine optimal strategies for maintaining changes. PMID:26590242
Phase retrieval in generalized optical interferometry systems.
Farriss, Wesley E; Fienup, James R; Malhotra, Tanya; Vamivakas, A Nick
2018-02-05
Modal analysis of an optical field via generalized interferometry (GI) is a novel technique that treats the field as a linear superposition of transverse modes and recovers the amplitudes of the modal weighting coefficients. We use phase retrieval by nonlinear optimization to recover the phase of these modal weighting coefficients. Information diversity increases the robustness of the algorithm by better constraining the solution. Additionally, multiple sets of random starting phase values assist the algorithm in overcoming local minima. The algorithm was able to recover nearly all coefficient phases from simulated data for fields consisting of up to 21 superposed Hermite-Gaussian modes, and proved to be resilient to shot noise.
NASA Astrophysics Data System (ADS)
Paek, Seung Weon; Kang, Jae Hyun; Ha, Naya; Kim, Byung-Moo; Jang, Dae-Hyun; Jeon, Junsu; Kim, DaeWook; Chung, Kun Young; Yu, Sung-eun; Park, Joo Hyun; Bae, SangMin; Song, DongSup; Noh, WooYoung; Kim, YoungDuck; Song, HyunSeok; Choi, HungBok; Kim, Kee Sup; Choi, Kyu-Myung; Choi, Woonhyuk; Jeon, JoongWon; Lee, JinWoo; Kim, Ki-Su; Park, SeongHo; Chung, No-Young; Lee, KangDuck; Hong, YoungKi; Kim, BongSeok
2012-03-01
A set of design for manufacturing (DFM) techniques have been developed and applied to 45nm, 32nm and 28nm logic process technologies. A novel methodology combined a number of potentially conflicting DFM techniques into a comprehensive solution. These techniques work in three phases for design optimization and one phase for silicon diagnostics. In the DFM prevention phase, foundation IP such as standard cells, IO, and memory and the P&R tech file are optimized. In the DFM solution phase, which happens during the ECO step, auto fixing of process weak patterns and advanced RC extraction are performed. In the DFM polishing phase, post-layout tuning is done to improve manufacturability. DFM analysis enables prioritization of random and systematic failures. The DFM technique presented in this paper has been silicon-proven with three successful tape-outs in Samsung 32nm processes; about 5% improvement in yield was achieved without any notable side effects. Visual inspection of silicon also confirmed the positive effect of the DFM techniques.
Near-optimal matrix recovery from random linear measurements.
Romanov, Elad; Gavish, Matan
2018-06-25
In matrix recovery from random linear measurements, one is interested in recovering an unknown M-by-N matrix [Formula: see text] from [Formula: see text] measurements [Formula: see text], where each [Formula: see text] is an M-by-N measurement matrix with i.i.d. random entries, [Formula: see text] We present a matrix recovery algorithm, based on approximate message passing, which iteratively applies an optimal singular-value shrinker, a nonconvex nonlinearity tailored specifically for matrix estimation. Our algorithm typically converges exponentially fast, offering a significant speedup over previously suggested matrix recovery algorithms, such as iterative solvers for nuclear norm minimization (NNM). It is well known that there is a recovery tradeoff between the information content of the object [Formula: see text] to be recovered (specifically, its matrix rank r) and the number of linear measurements n from which recovery is to be attempted. The precise tradeoff between r and n, beyond which recovery by a given algorithm becomes possible, traces the so-called phase transition curve of that algorithm in the [Formula: see text] plane. The phase transition curve of our algorithm is noticeably better than that of NNM. Interestingly, it is close to the information-theoretic lower bound for the minimal number of measurements needed for matrix recovery, making it not only state of the art in terms of convergence rate, but also near optimal in terms of the matrices it successfully recovers. Copyright © 2018 the Author(s). Published by PNAS.
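The per-iteration workhorse, shrinking the singular values of a matrix estimate, can be illustrated with a simple soft-thresholding shrinker. The paper's optimal shrinker is a different, nonconvex nonlinearity, so the sketch below only shows the mechanics of singular-value shrinkage on a denoising toy problem:

```python
import numpy as np

def shrink_singular_values(Y, tau):
    """Soft-threshold the singular values of Y. This convex shrinker is a
    stand-in for the paper's optimal (nonconvex) shrinker."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt       # rescale U's columns

rng = np.random.default_rng(1)
M, N, r, sigma = 60, 60, 2, 0.1
X0 = rng.standard_normal((M, r)) @ rng.standard_normal((r, N))  # rank-2 truth
Y = X0 + sigma * rng.standard_normal((M, N))                    # noisy observation
# threshold at the approximate operator-norm level of the noise
X_hat = shrink_singular_values(Y, tau=sigma * (np.sqrt(M) + np.sqrt(N)))
```

Because the noise singular values sit below the threshold while the rank-2 signal sits far above it, the shrunk estimate `X_hat` is much closer to the truth than the raw observation.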
Statistical mechanics of budget-constrained auctions
NASA Astrophysics Data System (ADS)
Altarelli, F.; Braunstein, A.; Realpe-Gomez, J.; Zecchina, R.
2009-07-01
Finding the optimal assignment in budget-constrained auctions is a combinatorial optimization problem with many important applications, a notable example being in the sale of advertisement space by search engines (in this context the problem is often referred to as the off-line AdWords problem). On the basis of the cavity method of statistical mechanics, we introduce a message-passing algorithm that is capable of solving efficiently random instances of the problem extracted from a natural distribution, and we derive from its properties the phase diagram of the problem. As the control parameter (average value of the budgets) is varied, we find two phase transitions delimiting a region in which long-range correlations arise.
Fallon, Marie T; Albert Lux, Eberhard; McQuade, Robert; Rossetti, Sandro; Sanchez, Raymond; Sun, Wei; Wright, Stephen; Lichtman, Aron H; Kornyeyeva, Elena
2017-08-01
Opioids are critical for managing cancer pain, but may provide inadequate relief and/or unacceptable side effects in some cases. To assess the analgesic efficacy of adjunctive Sativex (Δ9-tetrahydrocannabinol (27 mg/mL): cannabidiol (25 mg/mL)) in advanced cancer patients with chronic pain unalleviated by optimized opioid therapy. This report describes two phase 3, double-blind, randomized, placebo-controlled trials. Eligible patients had advanced cancer and average pain numerical rating scale (NRS) scores ≥4 and ≤8 at baseline, despite optimized opioid therapy. In Study-1, patients were randomized to Sativex or placebo, and then self-titrated study medications over a 2-week period per effect and tolerability, followed by a 3-week treatment period. In Study-2, all patients self-titrated Sativex over a 2-week period. Patients with a ≥15% improvement from baseline in pain score were then randomized 1:1 to Sativex or placebo, followed by a 5-week treatment period (randomized withdrawal design). The primary efficacy endpoint (percent improvement (Study-1) and mean change (Study-2) in average daily pain NRS scores) was not met in either study. Post hoc analyses of the primary endpoints identified a statistically favourable treatment effect for Sativex in US patients <65 years (median treatment difference: 8.8; 95% confidence interval (CI): 0.00-17.95; p = 0.040) that was not observed in patients <65 years from the rest of the world (median treatment difference: 0.2; 95% CI: -5.00 to 7.74; p = 0.794). Treatment effect in favour of Sativex was observed on quality-of-life questionnaires, despite the fact that similar effects were not observed on NRS score. The safety profile of Sativex was consistent with earlier studies, and no evidence of abuse or misuse was identified.
Sativex did not demonstrate superiority to placebo in reducing self-reported pain NRS scores in advanced cancer patients with chronic pain unalleviated by optimized opioid therapy, although further exploration of differences between United States and patients from the rest of the world is warranted.
GENOPT 2016: Design of a generalization-based challenge in global optimization
NASA Astrophysics Data System (ADS)
Battiti, Roberto; Sergeyev, Yaroslav; Brunato, Mauro; Kvasov, Dmitri
2016-10-01
While comparing results on benchmark functions is a widely used practice to demonstrate the competitiveness of global optimization algorithms, fixed benchmarks can lead to a negative data-mining process in which algorithms are overfitted to the benchmark. To avoid this effect, the GENOPT contest benchmarks can be used; they are based on randomized function generators, designed for scientific experiments, with fixed statistical characteristics but individual variation of the generated instances. The generators are available to participants for off-line tests and online tuning schemes, but the final competition is based on random seeds communicated in the last phase through a cooperative process. A brief presentation and discussion of the methods and results obtained in the framework of the GENOPT contest are given in this contribution.
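A seeded function generator with fixed statistical characteristics but seed-dependent instances, in the spirit (but not the detail) of the GENOPT generators, might look like:

```python
import numpy as np

def make_instance(seed, dim=2, n_terms=10):
    """One randomized benchmark instance: minus the max of random Gaussian
    bumps. The statistics (ranges of centers, heights, widths) are fixed,
    while the detail varies with the seed. The real GENOPT generators are
    more elaborate; this is only an illustration of the principle."""
    rng = np.random.default_rng(seed)
    centers = rng.uniform(-5.0, 5.0, (n_terms, dim))
    heights = rng.uniform(1.0, 2.0, n_terms)
    widths = rng.uniform(0.5, 1.5, n_terms)
    def f(x):
        d2 = np.sum((centers - x) ** 2, axis=1)
        return -np.max(heights * np.exp(-d2 / (2.0 * widths ** 2)))
    return f

# same seed -> identical instance; a new seed -> a new instance, same statistics
f_a, f_b, f_c = make_instance(7), make_instance(7), make_instance(8)
x = np.zeros(2)
```

Participants can tune off-line on instances from seeds of their choosing, while the competition seeds stay unknown until the final phase.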
Statistical Mechanics of Combinatorial Auctions
NASA Astrophysics Data System (ADS)
Galla, Tobias; Leone, Michele; Marsili, Matteo; Sellitto, Mauro; Weigt, Martin; Zecchina, Riccardo
2006-09-01
Combinatorial auctions are formulated as frustrated lattice gases on sparse random graphs, allowing the determination of the optimal revenue by methods of statistical physics. Transitions between computationally easy and hard regimes are found and interpreted in terms of the geometric structure of the space of solutions. We introduce an iterative algorithm to solve intermediate and large instances, and discuss competing states of optimal revenue and maximal number of satisfied bidders. The algorithm can be generalized to the hard phase and to more sophisticated auction protocols.
Tharwat, Alaa; Moemen, Yasmine S; Hassanien, Aboul Ella
2017-04-01
Measuring toxicity is an important step in drug development. Nevertheless, the current experimental methods used to estimate drug toxicity are expensive and time-consuming, indicating that they are not suitable for large-scale evaluation of drug toxicity in the early stage of drug development. Hence, there is a high demand to develop computational models that can predict drug toxicity risks. In this study, we used a dataset that consists of 553 drugs that are biotransformed in the liver. The toxic effects were calculated for the current data, namely mutagenic, tumorigenic, irritant, and reproductive effects. Each drug is represented by 31 chemical descriptors (features). The proposed model consists of three phases. In the first phase, the most discriminative subset of features is selected using rough set-based methods to reduce the classification time while improving the classification performance. In the second phase, different sampling methods such as Random Under-Sampling, Random Over-Sampling, the Synthetic Minority Oversampling Technique (SMOTE), Borderline-SMOTE and Safe-Level SMOTE are used to solve the problem of the imbalanced dataset. In the third phase, a Support Vector Machine (SVM) classifier is used to classify an unknown drug into toxic or non-toxic. SVM parameters such as the penalty parameter and kernel parameter have a great impact on the classification accuracy of the model. In this paper, the Whale Optimization Algorithm (WOA) is proposed to optimize the parameters of the SVM, so that the classification error can be reduced. The experimental results proved that the proposed model achieved high sensitivity to all toxic effects. Overall, the high sensitivity of the WOA+SVM model indicates that it could be used for the prediction of drug toxicity in the early stage of drug development. Copyright © 2017 Elsevier Inc. All rights reserved.
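The oversampling step in the second phase can be illustrated with a minimal SMOTE-style interpolation (the study also used under-sampling and the Borderline/Safe-Level variants; the neighbour count and sizes here are illustrative):

```python
import numpy as np

def smote_like(X_min, n_new, k=5, seed=0):
    """Minimal SMOTE-style oversampling: each synthetic sample interpolates
    a randomly chosen minority point toward one of its k nearest minority
    neighbours."""
    rng = np.random.default_rng(seed)
    out = np.empty((n_new, X_min.shape[1]))
    for i in range(n_new):
        j = rng.integers(len(X_min))
        d = np.sum((X_min - X_min[j]) ** 2, axis=1)
        nbrs = np.argsort(d)[1:k + 1]                # skip the point itself
        neighbour = X_min[rng.choice(nbrs)]
        out[i] = X_min[j] + rng.uniform() * (neighbour - X_min[j])
    return out

rng = np.random.default_rng(1)
X_min = rng.normal(size=(20, 3))   # 20 minority-class samples, 3 descriptors
X_new = smote_like(X_min, n_new=30)
```

The balanced data would then be fed to the SVM, whose penalty and kernel parameters are tuned by the WOA in the third phase (not shown).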
Experimental evaluation of fingerprint verification system based on double random phase encoding
NASA Astrophysics Data System (ADS)
Suzuki, Hiroyuki; Yamaguchi, Masahiro; Yachida, Masuyoshi; Ohyama, Nagaaki; Tashima, Hideaki; Obi, Takashi
2006-03-01
We proposed a smart card holder authentication system that combines fingerprint verification with PIN verification by applying a double random phase encoding scheme. In this system, the probability of accurate verification of an authorized individual decreases when the fingerprint is significantly shifted. In this paper, a review of the proposed system is presented and preprocessing for improving the false rejection rate is proposed. In the proposed method, the position difference between two fingerprint images is estimated by using an optimized template for core detection. When the estimated difference exceeds the permissible level, the user inputs the fingerprint again. The effectiveness of the proposed method is confirmed by a computational experiment; the results show that the false rejection rate is improved.
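The underlying double random phase encoding scheme is compact: one random phase mask in the input plane and one in the Fourier plane. A sketch of the classical scheme (not the authors' full fingerprint pipeline) shows why exact recovery needs the Fourier-plane key:

```python
import numpy as np

def drpe_encrypt(img, r1, r2):
    """Classical double random phase encoding: multiply by a random phase
    mask in the input plane (r1) and another in the Fourier plane (r2);
    r1 and r2 are uniform in [0, 1)."""
    return np.fft.ifft2(np.fft.fft2(img * np.exp(2j * np.pi * r1))
                        * np.exp(2j * np.pi * r2))

def drpe_decrypt(cipher, r2):
    """Undo the Fourier-plane mask; taking the amplitude then removes the
    input-plane phase mask, recovering a nonnegative real image."""
    return np.abs(np.fft.ifft2(np.fft.fft2(cipher) * np.exp(-2j * np.pi * r2)))

rng = np.random.default_rng(2)
img = rng.random((32, 32))        # stand-in for a fingerprint image
r1, r2 = rng.random((2, 32, 32))  # the two random phase keys
recovered = drpe_decrypt(drpe_encrypt(img, r1, r2), r2)
```

A lateral shift of the input image breaks this exact inverse relationship, which is the sensitivity the proposed core-detection preprocessing compensates for.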
Robust Dynamic Multi-objective Vehicle Routing Optimization Method.
Guo, Yi-Nan; Cheng, Jian; Luo, Sha; Gong, Dun-Wei
2017-03-21
For dynamic multi-objective vehicle routing problems, the waiting time of vehicles, the number of serving vehicles, and the total distance of routes are normally considered as the optimization objectives. In addition to these objectives, this paper focuses on fuel consumption, which leads to environmental pollution and energy consumption. Considering the vehicles' load and driving distance, a corresponding carbon emission model was built and set as an optimization objective. Dynamic multi-objective vehicle routing problems with hard time windows and randomly appearing dynamic customers were subsequently modeled. In existing planning methods, when a new service demand comes up, a global vehicle routing optimization method is triggered to find optimal routes for non-served customers, which is time-consuming. Therefore, a robust two-phase dynamic multi-objective vehicle routing method is proposed. Three highlights of the novel method are: (i) after finding optimal robust virtual routes for all customers by adopting multi-objective particle swarm optimization in the first phase, static vehicle routes for static customers are formed by removing all dynamic customers from the robust virtual routes in the next phase; (ii) dynamically appearing customers are appended to be served according to their service times and the vehicles' statuses, and global vehicle routing optimization is triggered only when no suitable locations can be found for dynamic customers; (iii) a metric measuring the algorithm's robustness is given. The statistical results indicated that the routes obtained by the proposed method have better stability and robustness, but may be sub-optimal. Moreover, time-consuming global vehicle routing optimization is avoided as dynamic customers appear.
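The load- and distance-dependent emission objective and the phase-two route extraction can be sketched as follows. The linear load model and its coefficients are assumptions for illustration; the paper's carbon emission model may differ in form:

```python
def route_emission(route, demand, dist, a=1.0, b=0.1, depot=0):
    """Carbon/fuel cost of one route under a linear load model: each leg
    costs (a + b * current_load) * distance, so emissions depend on both the
    remaining load and the distance driven. Coefficients a, b are illustrative."""
    load = sum(demand[c] for c in route)
    total, pos = 0.0, depot
    for c in route:
        total += (a + b * load) * dist[pos][c]
        load -= demand[c]                   # customer c's demand is dropped off
        pos = c
    return total + a * dist[pos][depot]     # empty vehicle returns to the depot

def static_routes(virtual_routes, dynamic_customers):
    """Phase two: static routes are the robust virtual routes with all
    dynamic customers removed."""
    return [[c for c in r if c not in dynamic_customers] for r in virtual_routes]

dist = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]    # toy symmetric distance matrix
e = route_emission([1, 2], {1: 2, 2: 3}, dist)
routes = static_routes([[1, 5, 2], [3, 4]], dynamic_customers={4, 5})
```

Dynamic customers are later re-inserted into the static routes as they appear; only when no suitable insertion exists is the full multi-objective re-optimization triggered.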
Ramasubbu, Rajamannar; Anderson, Susan; Haffenden, Angela; Chavda, Swati; Kiss, Zelma H T
2013-09-01
Deep brain stimulation (DBS) of the subcallosal cingulate (SCC) is reported to be a safe and effective new treatment for treatment-resistant depression (TRD). However, the optimal electrical stimulation parameters are unknown and generally selected by trial and error. This pilot study investigated the relationship between stimulus parameters and clinical effects in SCC-DBS treatment for TRD. Four patients with TRD underwent SCC-DBS surgery. In a double-blind stimulus optimization phase, frequency and pulse widths were randomly altered weekly, and corresponding changes in mood and depression were evaluated using a visual analogue scale (VAS) and the 17-item Hamilton Rating Scale for Depression (HAM-D-17). In the open-label postoptimization phase, depressive symptoms were evaluated biweekly for 6 months to determine long-term clinical outcomes. Longer pulse widths (270-450 μs) were associated with reductions in HAM-D-17 scores in 3 patients and maximal happy mood VAS responses in all 4 patients. Only 1 patient showed acute clinical or mood effects from changing the stimulation frequency. After 6 months of open-label therapy, 2 patients responded and 1 patient partially responded. Limitations include small sample size, weekly changes in stimulus parameters, and fixed-order and carry-forward effects. Longer pulse width stimulation may have a role in stimulus optimization for SCC-DBS in TRD. Longer pulse durations produce larger apparent current spread, suggesting that we do not yet know the optimal target or stimulus parameters for this therapy. Investigations using different stimulus parameters are required before embarking on large-scale randomized sham-controlled trials.
NASA Astrophysics Data System (ADS)
Pahlavani, M. R.; Firoozi, B.
2016-09-01
γ-ray transitions from excited states of the ^16N and ^16O isomers that appear in the γ spectrum of the ^16_6C_10 → ^16_7N_9 → ^16_8O_8 beta decay chain are investigated. The theoretical approach used in this research starts with a mean-field potential consisting of a phenomenological Woods-Saxon potential including spin-orbit and Coulomb terms (for protons) in order to obtain single-particle energies and wave functions for nucleons in a nucleus. A schematic residual surface delta interaction is then employed on top of the mean field and is treated within the proton-neutron Tamm-Dancoff approximation (pnTDA) and the proton-neutron random phase approximation. The goal is to use an optimized surface delta interaction, as a residual interaction, to improve the results. We have used artificial intelligence algorithms to establish a good agreement between theoretical and experimental energy spectra. The final results of the 'optimized' calculations are reasonable via this approach.
NASA Astrophysics Data System (ADS)
Lin, Chao; Shen, Xueju; Hua, Binbin; Wang, Zhisong
2015-10-01
We demonstrate the feasibility of three dimensional (3D) polarization multiplexing by optimizing a single vectorial beam using a multiple-signal window multiple-plane (MSW-MP) phase retrieval algorithm. Original messages represented with multiple quick response (QR) codes are first partitioned into a series of subblocks. Then, each subblock is marked with a specific polarization state and randomly distributed in 3D space with both longitudinal and transversal adjustable freedoms. A generalized 3D polarization mapping protocol is established to generate a 3D polarization key. Finally, the multiple QR codes are encrypted into one phase-only mask and one polarization-only mask based on the modified Gerchberg-Saxton (GS) algorithm. We take the polarization mask as the ciphertext and the phase-only mask as an additional dimension of the key. Only when both the phase key and the 3D polarization key are correct can the original messages be recovered. We verify our proposal with both simulation and experimental evidence.
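The mask pair is produced by a modified Gerchberg-Saxton iteration. The plain Fourier-domain GS loop it builds on can be sketched as follows (the MSW-MP modifications for multiple signal windows and multiple planes are not shown):

```python
import numpy as np

def gs_phase_only(target_amp, n_iter=50, seed=0):
    """Plain Fourier-domain Gerchberg-Saxton: iterate toward a unit-modulus
    (phase-only) mask whose Fourier-transform amplitude approximates target_amp."""
    rng = np.random.default_rng(seed)
    mask = np.exp(2j * np.pi * rng.random(target_amp.shape))
    for _ in range(n_iter):
        recon = np.fft.fft2(mask)                          # propagate to output plane
        field = target_amp * np.exp(1j * np.angle(recon))  # impose target amplitude
        mask = np.exp(1j * np.angle(np.fft.ifft2(field)))  # back-propagate, keep phase
    return mask

def recon_error(mask, target_amp):
    """Normalized amplitude error of the reconstruction."""
    amp = np.abs(np.fft.fft2(mask))
    amp = amp * (np.linalg.norm(target_amp) / np.linalg.norm(amp))
    return np.linalg.norm(amp - target_amp)

target = np.zeros((64, 64))
target[16:48, 16:48] = 1.0            # simple square target amplitude
m0 = gs_phase_only(target, n_iter=0)  # random starting mask
m = gs_phase_only(target, n_iter=50)
```

Each GS iteration projects between the two constraint sets, so the reconstruction error of the iterated mask drops well below that of the random-phase start.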
Visibility-Based Hypothesis Testing Using Higher-Order Optical Interference
NASA Astrophysics Data System (ADS)
Jachura, Michał; Jarzyna, Marcin; Lipka, Michał; Wasilewski, Wojciech; Banaszek, Konrad
2018-03-01
Many quantum information protocols rely on optical interference to compare data sets with efficiency or security unattainable by classical means. Standard implementations exploit first-order coherence between signals whose preparation requires a shared phase reference. Here, we analyze and experimentally demonstrate the binary discrimination of visibility hypotheses based on higher-order interference for optical signals with a random relative phase. This provides a robust protocol implementation primitive when a phase lock is unavailable or impractical. With the primitive cost quantified by the total detected optical energy, optimal operation is typically reached in the few-photon regime.
Kaufman, Howard L; Bines, Steven D
2010-06-01
There are few effective treatment options available for patients with advanced melanoma. An oncolytic herpes simplex virus type 1 encoding granulocyte macrophage colony-stimulating factor (GM-CSF; Oncovex(GM-CSF)) for direct injection into accessible melanoma lesions resulted in a 28% objective response rate in a Phase II clinical trial. Responding patients demonstrated regression of both injected and noninjected lesions highlighting the dual mechanism of action of Oncovex(GM-CSF) that includes both a direct oncolytic effect in injected tumors and a secondary immune-mediated anti-tumor effect on noninjected tumors. Based on these preliminary results a prospective, randomized Phase III clinical trial in patients with unresectable Stage IIIb or c and Stage IV melanoma has been initiated. The rationale, study design, end points and future development of the Oncovex(GM-CSF) Pivotal Trial in Melanoma (OPTIM) trial are discussed in this article.
Are genetically robust regulatory networks dynamically different from random ones?
NASA Astrophysics Data System (ADS)
Sevim, Volkan; Rikvold, Per Arne
We study a genetic regulatory network model developed to demonstrate that genetic robustness can evolve through stabilizing selection for optimal phenotypes. We report preliminary results on whether such selection could result in a reorganization of the state space of the system. For the chosen parameters, the evolution moves the system slightly toward the more ordered part of the phase diagram. We also find that strong memory effects cause the Derrida annealed approximation to give erroneous predictions about the model's phase diagram.
NASA Astrophysics Data System (ADS)
Zhuang, Yufei; Huang, Haibin
2014-02-01
A hybrid algorithm combining the particle swarm optimization (PSO) algorithm with the Legendre pseudospectral method (LPM) is proposed for solving the time-optimal trajectory planning problem of underactuated spacecraft. At the beginning of the searching process, an initialization generator is constructed with the PSO algorithm due to its strong global searching ability and robustness to random initial values; however, the PSO algorithm's convergence rate around the global optimum is slow. Therefore, when the change in the fitness function becomes smaller than a predefined value, the search is switched to the LPM to accelerate the process. Thus, with the solutions obtained by the PSO algorithm as a set of proper initial guesses, the hybrid algorithm can find a global optimum more quickly and accurately. Results of 200 Monte Carlo simulations demonstrate that the proposed hybrid PSO-LPM algorithm has greater advantages in terms of global searching capability and convergence rate than either the PSO algorithm or the LPM alone. Moreover, the PSO-LPM algorithm is also robust to random initial values.
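The two-stage idea, a global stochastic search handed off to a fast local method once improvement stalls, can be sketched as follows. A plain gradient descent stands in for the LPM stage, and all coefficients (inertia, acceleration, tolerance) are illustrative:

```python
import numpy as np

def pso_then_local(f, grad, dim=2, n_particles=20, tol=1e-6, max_iter=200, seed=0):
    """Global PSO search; when the best fitness improves by less than tol,
    switch to a local gradient descent (standing in for the LPM stage)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.array([f(p) for p in x])
    g = pbest[np.argmin(pval)].copy()
    last = pval.min()
    for _ in range(max_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        improved = fx < pval
        pbest[improved] = x[improved]
        pval[improved] = fx[improved]
        g = pbest[np.argmin(pval)].copy()
        if last - pval.min() < tol:      # switch criterion: improvement stalled
            break
        last = pval.min()
    for _ in range(500):                 # local refinement from the PSO solution
        g = g - 0.1 * grad(g)
    return g

sphere = lambda z: float(np.sum(z ** 2))   # toy fitness with optimum at the origin
best = pso_then_local(sphere, lambda z: 2 * z)
```

On this toy problem the PSO stage supplies a good initial guess and the local stage polishes it to high accuracy, mirroring how the PSO solutions seed the LPM.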
Bayesian Phase II optimization for time-to-event data based on historical information.
Bertsche, Anja; Fleischer, Frank; Beyersmann, Jan; Nehmiz, Gerhard
2017-01-01
After exploratory drug development, companies face the decision whether to initiate confirmatory trials based on limited efficacy information. This proof-of-concept decision is typically performed after a Phase II trial studying a novel treatment versus either placebo or an active comparator. The article aims to optimize the design of such a proof-of-concept trial with respect to decision making. We incorporate historical information and develop pre-specified decision criteria accounting for the uncertainty of the observed treatment effect. We optimize these criteria based on sensitivity and specificity, given the historical information. Specifically, time-to-event data are considered in a randomized 2-arm trial with additional prior information on the control treatment. The proof-of-concept criterion uses treatment effect size, rather than significance. Criteria are defined on the posterior distribution of the hazard ratio given the Phase II data and the historical control information. Event times are exponentially modeled within groups, allowing for group-specific conjugate prior-to-posterior calculation. While a non-informative prior is placed on the investigational treatment, the control prior is constructed via the meta-analytic-predictive approach. The design parameters including sample size and allocation ratio are then optimized, maximizing the probability of taking the right decision. The approach is illustrated with an example in lung cancer.
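With exponential event times and Gamma priors within each group, the prior-to-posterior step is conjugate: a Gamma(a, b) prior (shape-rate) on the hazard rate updates to Gamma(a + d, b + T) after d events in total follow-up time T. A sketch of the resulting effect-size decision criterion, where all numbers, including the historical control prior and the 0.8/0.6 thresholds, are illustrative assumptions rather than the article's values:

```python
import numpy as np

rng = np.random.default_rng(0)

# control arm: informative prior from historical data, plus Phase II data
a_c, b_c = 40.0 + 25, 400.0 + 260     # prior (40, 400) + 25 events / 260 time units
# investigational arm: near-non-informative prior, Phase II data only
a_t, b_t = 0.001 + 15, 0.001 + 280    # 15 events / 280 time units

lam_c = rng.gamma(a_c, 1.0 / b_c, 100_000)   # NumPy's gamma takes shape, scale
lam_t = rng.gamma(a_t, 1.0 / b_t, 100_000)
hr = lam_t / lam_c                           # posterior draws of the hazard ratio

# proof-of-concept decision based on effect size rather than significance
prob_effect = np.mean(hr < 0.8)
go = prob_effect > 0.6
```

Sensitivity and specificity of such a criterion can then be estimated by simulating trials under effect and no-effect scenarios, which is the quantity the design parameters (sample size, allocation ratio) are optimized against.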
Liu, Wei; Schild, Steven E.; Chang, Joe Y.; Liao, Zhongxing; Chang, Yu-Hui; Wen, Zhifei; Shen, Jiajian; Stoker, Joshua B.; Ding, Xiaoning; Hu, Yanle; Sahoo, Narayan; Herman, Michael G.; Vargas, Carlos; Keole, Sameer; Wong, William; Bues, Martin
2015-01-01
Background To compare the impact of uncertainties and interplay effect on 3D and 4D robustly optimized intensity-modulated proton therapy (IMPT) plans for lung cancer in an exploratory methodology study. Methods IMPT plans were created for 11 non-randomly selected non-small-cell lung cancer (NSCLC) cases: 3D robustly optimized plans on average CTs with internal gross tumor volume density overridden to irradiate the internal target volume, and 4D robustly optimized plans on 4D CTs to irradiate the clinical target volume (CTV). Regular fractionation (66 Gy[RBE] in 33 fractions) was considered. In 4D optimization, the CTV of individual phases received non-uniform doses to achieve a uniform cumulative dose. The root-mean-square-dose volume histograms (RVH) measured the sensitivity of the dose to uncertainties, and the areas under the RVH curve (AUCs) were used to evaluate plan robustness. Dose evaluation software modeled time-dependent spot delivery to incorporate the interplay effect with randomized starting phases of each field per fraction. Dose-volume histogram indices comparing CTV coverage, homogeneity, and normal tissue sparing were evaluated using the Wilcoxon signed-rank test. Results 4D robustly optimized plans led to smaller AUC for CTV (14.26 vs. 18.61; p=0.001), better CTV coverage (Gy[RBE]) [D95% CTV: 60.6 vs. 55.2 (p=0.001)], and better CTV homogeneity [D5%–D95% CTV: 10.3 vs. 17.7 (p=0.002)] in the face of uncertainties. With the interplay effect considered, 4D robust optimization produced plans with better target coverage [D95% CTV: 64.5 vs. 63.8 (p=0.0068)], comparable target homogeneity, and comparable normal tissue protection. The benefits from 4D robust optimization were most obvious for the 2 typical stage III lung cancer patients.
Conclusions Our exploratory methodology study showed that, compared to 3D robust optimization, 4D robust optimization produced significantly more robust and interplay-effect-resistant plans for targets with comparable dose distributions for normal tissues. A further study with a larger and more realistic patient population is warranted to generalize the conclusions. PMID:26725727
Childress, Ann C; Wigal, Sharon B; Brams, Matthew N; Turnbow, John M; Pincus, Yulia; Belden, Heidi W; Berry, Sally A
2018-06-01
To determine the efficacy and safety of amphetamine extended-release oral suspension (AMPH EROS) in the treatment of attention-deficit/hyperactivity disorder (ADHD) in a dose-optimized, randomized, double-blind, parallel-group study. Boys and girls aged 6 to 12 years diagnosed with ADHD were enrolled. During a 5-week, open-label, dose-optimization phase, patients began treatment with 2.5 or 5 mg/day of AMPH EROS; doses were titrated until an optimal dose (maximum 20 mg/day) was reached. During the double-blind phase, patients were randomized to receive treatment with either their optimized dose (10-20 mg/day) of AMPH EROS or placebo for 1 week. Efficacy was assessed in a laboratory classroom setting on the final day of double-blind treatment using the Swanson, Kotkin, Agler, M-Flynn, and Pelham (SKAMP) Rating Scale and the Permanent Product Measure of Performance (PERMP) test. Safety was assessed by measuring adverse events (AEs) and vital signs. The study was completed by 99 patients. The primary efficacy endpoint (change from predose SKAMP-Combined score at 4 hours postdose) and secondary endpoints (change from predose SKAMP-Combined scores at 1, 2, 6, 8, 10, 12, and 13 hours postdose) were statistically significantly improved with AMPH EROS treatment versus placebo at all time points. Onset of treatment effect was present by 1 hour postdosing, the first time point measured, and efficacy lasted through 13 hours postdosing. PERMP data mirrored the SKAMP-Combined score data. AEs (>5%) reported during dose optimization were decreased appetite, insomnia, affect lability, upper abdominal pain, mood swings, and headache. AMPH EROS was effective in reducing symptoms of ADHD and had a rapid onset and extended duration of effect. Reported AEs were consistent with those of other extended-release amphetamine products.
Seymour, Lesley; Ivy, S. Percy; Sargent, Daniel; Spriggs, David; Baker, Laurence; Rubinstein, Larry; Ratain, Mark J; Le Blanc, Michael; Stewart, David; Crowley, John; Groshen, Susan; Humphrey, Jeffrey S; West, Pamela; Berry, Donald
2010-01-01
The optimal design of phase II studies continues to be the subject of vigorous debate, especially with regard to studies of newer molecularly targeted agents. The observations that many new therapeutics ‘fail’ in definitive phase III studies, coupled with the number of new agents to be tested and the increasing costs and complexity of clinical trials, further emphasize the critical importance of robust and efficient phase II design. The Clinical Trial Design Task Force (CTD-TF) of the NCI Investigational Drug Steering Committee (IDSC) has published a series of discussion papers on phase II trial design in Clinical Cancer Research. The IDSC has developed formal recommendations regarding aspects of phase II trial design that are the subject of frequent debate, such as endpoints (response vs. progression-free survival), randomization (single-arm designs vs. randomization), inclusion of biomarkers, biomarker-based patient enrichment strategies, and statistical design (e.g., two-stage designs vs. multiple-group adaptive designs). While these recommendations in general encourage the use of progression-free survival as the primary endpoint, the use of randomization, the inclusion of biomarkers, and the incorporation of newer designs, we acknowledge that objective response as an endpoint and single-arm designs remain relevant in certain situations. The design of any clinical trial should always be carefully evaluated and justified based on the characteristics specific to the situation. PMID:20215557
Efficient image projection by Fourier electroholography.
Makowski, Michał; Ducin, Izabela; Kakarenko, Karol; Kolodziejczyk, Andrzej; Siemion, Agnieszka; Siemion, Andrzej; Suszek, Jaroslaw; Sypek, Maciej; Wojnowski, Dariusz
2011-08-15
An improved efficient projection of color images is presented. It uses a phase spatial light modulator with three iteratively optimized Fourier holograms displayed simultaneously, each for one primary color. This spatial division instead of time division provides stable images. A pixelated structure of the modulator and fluctuations of liquid crystal molecules cause a zeroth-order peak, eliminated by additional wavelength-dependent phase factors shifting it before the image plane, where it is blocked with a matched filter. Speckles are suppressed by time integration of variable speckle patterns generated by additional randomizations of an initial phase and minor changes of the signal. © 2011 Optical Society of America
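Iteratively optimized Fourier holograms of this kind are typically computed with a Gerchberg-Saxton-style loop that alternates between the image plane and the hologram plane. A minimal one-dimensional sketch in plain Python (naive DFT; the target amplitudes, iteration count, and seed are illustrative assumptions, not parameters from the paper):

```python
import cmath
import math
import random

def dft(x, inverse=False):
    # Naive O(N^2) discrete Fourier transform, sufficient for a tiny demo.
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(sign * 2j * math.pi * j * k / n) for k in range(n))
           for j in range(n)]
    if inverse:
        out = [v / n for v in out]
    return out

def gerchberg_saxton(target_amp, iters=50, seed=0):
    # Find a phase-only hologram whose Fourier transform approximates target_amp.
    rng = random.Random(seed)
    n = len(target_amp)
    field = [cmath.exp(1j * rng.uniform(0, 2 * math.pi)) for _ in range(n)]
    for _ in range(iters):
        img = dft(field)
        # Image-plane constraint: impose the target amplitude, keep the phase.
        img = [a * cmath.exp(1j * cmath.phase(v)) for a, v in zip(target_amp, img)]
        holo = dft(img, inverse=True)
        # Hologram-plane constraint: phase only (unit amplitude).
        field = [cmath.exp(1j * cmath.phase(v)) for v in holo]
    return field
```

Each iteration projects onto the two constraint sets in turn, so the reconstruction error is nonincreasing across iterations, the classical error-reduction behavior.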
Saha, S. K.; Dutta, R.; Choudhury, R.; Kar, R.; Mandal, D.; Ghoshal, S. P.
2013-01-01
In this paper, opposition-based harmony search (OHS) has been applied to the optimal design of linear-phase FIR filters. RGA, PSO, and DE have also been adopted for the sake of comparison. The original harmony search algorithm is chosen as the parent, and an opposition-based approach is applied. During initialization, a randomly generated population of solutions is chosen, the opposite solutions are also considered, and the fitter ones are selected as the a priori guess. In harmony memory, each such solution passes through the memory consideration rule, the pitch adjustment rule, and then opposition-based reinitialization generation jumping, which gives the optimum result corresponding to the least error fitness in the multidimensional search space of FIR filter design. Incorporation of different control parameters in the basic HS algorithm balances exploration and exploitation of the search space. Low-pass, high-pass, band-pass, and band-stop FIR filters are designed with the proposed OHS and each of the other aforementioned algorithms for a comparison of optimization performance. A comparison of simulation results reveals the optimization efficacy of OHS over the other optimization techniques for the solution of the multimodal, nondifferentiable, nonlinear, and constrained FIR filter design problem. PMID:23844390
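The opposition-based initialization step described above can be sketched in plain Python. The quadratic error below is only a stand-in for the paper's FIR error fitness, and the bounds, population size, and target vector are illustrative assumptions:

```python
import random

def opposition_based_init(fitness, dim, lb, ub, pop_size, seed=0):
    # Opposition-based initialization: for each random candidate x, also
    # evaluate its opposite (lb + ub - x) and keep whichever is fitter.
    rng = random.Random(seed)
    population = []
    for _ in range(pop_size):
        x = [rng.uniform(lb, ub) for _ in range(dim)]
        x_opp = [lb + ub - xi for xi in x]  # opposite point
        population.append(min(x, x_opp, key=fitness))  # lower fitness = better
    return population

# Stand-in objective: squared distance to an assumed coefficient vector.
target = [0.5, -0.25, 0.1]
def sphere_error(x):
    return sum((xi - ti) ** 2 for xi, ti in zip(x, target))
```

Evaluating the opposite of every random candidate doubles the chance that the retained a priori guesses start near the optimum, which is the rationale given for the OHS variant.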
SPIRIT: A seamless phase I/II randomized design for immunotherapy trials.
Guo, Beibei; Li, Daniel; Yuan, Ying
2018-06-07
Immunotherapy, treatments that enlist the immune system to battle tumors, has received widespread attention in cancer research. Due to its unique features and mechanisms for treating cancer, immunotherapy requires novel clinical trial designs. We propose a Bayesian seamless phase I/II randomized design for immunotherapy trials (SPIRIT) to find the optimal biological dose (OBD), defined in terms of the restricted mean survival time. We jointly model progression-free survival and the immune response. Progression-free survival is used as the primary endpoint to determine the OBD, and the immune response is used as an ancillary endpoint to quickly screen out futile doses. Toxicity is monitored throughout the trial. The design consists of two seamlessly connected stages. The first stage identifies a set of safe doses. The second stage adaptively randomizes patients to the safe doses identified and uses their progression-free survival and immune response to find the OBD. The simulation study shows that the SPIRIT has desirable operating characteristics and outperforms the conventional design. Copyright © 2018 John Wiley & Sons, Ltd.
Delrieu, Isabelle; Leboulleux, Didier; Ivinson, Karen; Gessner, Bradford D
2015-03-24
Vaccines interrupting Plasmodium falciparum malaria transmission targeting sexual, sporogonic, or mosquito-stage antigens (SSM-VIMT) are currently under development to reduce malaria transmission. An international group of malaria experts was established to evaluate the feasibility and optimal design of a Phase III cluster randomized trial (CRT) that could support regulatory review and approval of an SSM-VIMT. The consensus design is a CRT with a sentinel population randomly selected from defined inner and buffer zones in each cluster, a cluster size sufficient to assess true vaccine efficacy in the inner zone, and inclusion of ongoing assessment of vaccine impact stratified by distance of residence from the cluster edge. Trials should be conducted first in areas of moderate transmission, where SSM-VIMT impact should be greatest. Sample size estimates suggest that such a trial is feasible, and within the range of previously supported trials of malaria interventions, although substantial issues to implementation exist. Copyright © 2015 Elsevier Ltd. All rights reserved.
Progress in low-resolution ab initio phasing with CrowdPhase
Jorda, Julien; Sawaya, Michael R.; Yeates, Todd O.
2016-03-01
Ab initio phasing by direct computational methods in low-resolution X-ray crystallography is a long-standing challenge. A common approach is to consider it as two subproblems: sampling of phase space and identification of the correct solution. While the former is amenable to a myriad of search algorithms, devising a reliable target function for the latter problem remains an open question. Here, recent developments in CrowdPhase, a collaborative online game powered by a genetic algorithm that evolves an initial population of individuals with random genetic make-up (i.e. random phases), each expressing a phenotype in the form of an electron-density map, are presented. Success relies on the ability of human players to visually evaluate the quality of these maps and, following a Darwinian survival-of-the-fittest concept, direct the search towards optimal solutions. While an initial study demonstrated the feasibility of the approach, some important crystallographic issues were overlooked for the sake of simplicity. To address these, the new CrowdPhase includes consideration of space-group symmetry, a method for handling missing amplitudes, the use of a map correlation coefficient as a quality metric, and a solvent-flattening step. Lastly, performances of this installment are discussed for two low-resolution test cases based on bona fide diffraction data.
Polarimetry With Phased Array Antennas: Theoretical Framework and Definitions
NASA Astrophysics Data System (ADS)
Warnick, Karl F.; Ivashina, Marianna V.; Wijnholds, Stefan J.; Maaskant, Rob
2012-01-01
For phased array receivers, the accuracy with which the polarization state of a received signal can be measured depends on the antenna configuration, array calibration process, and beamforming algorithms. A signal and noise model for a dual-polarized array is developed and related to standard polarimetric antenna figures of merit, and the ideal polarimetrically calibrated, maximum-sensitivity beamforming solution for a dual-polarized phased array feed is derived. A practical polarimetric beamformer solution that does not require exact knowledge of the array polarimetric response is shown to be equivalent to the optimal solution in the sense that when the practical beamformers are calibrated, the optimal solution is obtained. To provide a rough initial polarimetric calibration for the practical beamformer solution, an approximate single-source polarimetric calibration method is developed. The modeled instrumental polarization error for a dipole phased array feed with the practical beamformer solution and single-source polarimetric calibration was -10 dB or lower over the array field of view for elements with alignments perturbed by random rotations with 5 degree standard deviation.
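The maximum-sensitivity beamformer referenced above takes the familiar form w proportional to R^{-1} a, with R the noise covariance and a the signal steering vector. A minimal single-polarization sketch for a two-element array (the paper's dual-polarized solution pairs two such beamformers and adds a polarimetric calibration; the matrices below are illustrative):

```python
def max_sensitivity_weights(R, a):
    # Maximum-sensitivity beamformer for a 2-element array: w = R^{-1} a,
    # where R is the 2x2 noise covariance and a is the steering vector.
    # Explicit 2x2 inverse keeps the sketch dependency-free.
    det = R[0][0] * R[1][1] - R[0][1] * R[1][0]
    inv = [[R[1][1] / det, -R[0][1] / det],
           [-R[1][0] / det, R[0][0] / det]]
    return [inv[0][0] * a[0] + inv[0][1] * a[1],
            inv[1][0] * a[0] + inv[1][1] * a[1]]
```

With white (identity) noise the weights reduce to the steering vector itself; correlated or unequal noise tilts the weights away from the noisier element.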
Hybrid feature selection for supporting lightweight intrusion detection systems
NASA Astrophysics Data System (ADS)
Song, Jianglong; Zhao, Wentao; Liu, Qiang; Wang, Xin
2017-08-01
Redundant and irrelevant features not only cause high resource consumption but also degrade the performance of Intrusion Detection Systems (IDS), especially when coping with big data. These features slow down the process of training and testing in network traffic classification. Therefore, a hybrid feature selection approach combining wrapper and filter selection is designed in this paper to build a lightweight intrusion detection system. Two main phases are involved in this method. The first phase conducts a preliminary search for an optimal subset of features, in which chi-square feature selection is utilized. The set of features selected in the previous phase is further refined in the second phase in a wrapper manner, in which a Random Forest (RF) is used to guide the selection process and retain an optimized set of features. After that, we build an RF-based detection model and make a fair comparison with other approaches. The experimental results on the NSL-KDD dataset show that our approach results in higher detection accuracy as well as faster training and testing processes.
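The first (filter) phase can be illustrated with a chi-square relevance ranking on binary features. This sketch covers only the preliminary search, not the paper's RF-guided wrapper refinement, and the data layout and parameter names are assumptions for the example:

```python
from collections import Counter

def chi2_score(feature, labels):
    # Chi-square statistic between a binary feature and binary class labels.
    n = len(feature)
    obs = Counter(zip(feature, labels))
    f_tot = Counter(feature)
    l_tot = Counter(labels)
    score = 0.0
    for f in (0, 1):
        for l in (0, 1):
            expected = f_tot[f] * l_tot[l] / n
            if expected > 0:
                score += (obs[(f, l)] - expected) ** 2 / expected
    return score

def chi2_filter(X, y, k):
    # Filter phase: rank features by chi-square relevance to the label
    # and keep the top k as the preliminary subset for the wrapper phase.
    scores = [(chi2_score([row[j] for row in X], y), j) for j in range(len(X[0]))]
    scores.sort(reverse=True)
    return [j for _, j in scores[:k]]
```

A wrapper phase would then search over subsets of the surviving features, scoring each subset by cross-validated classifier accuracy rather than by a per-feature statistic.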
Pasiakos, Stefan M; Berryman, Claire E; Karl, J Philip; Lieberman, Harris R; Orr, Jeb S; Margolis, Lee M; Caldwell, John A; Young, Andrew J; Montano, Monty A; Evans, William J; Vartanian, Oshin; Carmichael, Owen T; Gadde, Kishore M; Harris, Melissa; Rood, Jennifer C
2017-07-01
The physiological consequences of severe energy deficit include hypogonadism and the loss of fat-free mass. Prolonged energy deficit also impacts physical performance, mood, attentiveness, and decision-making capabilities. This study will determine whether maintaining a eugonadal state during severe, sustained energy deficit attenuates physiological decrements and maintains mental performance. This study will also assess the effects of normalizing testosterone levels during severe energy deficit and recovery on gut health and appetite regulation. Fifty physically active men will participate in a 3-phase, randomized, placebo-controlled study. After completing a 14-d, energy-adequate, diet acclimation phase (protein: 1.6 g·kg⁻¹·d⁻¹; fat: 30% of total energy intake), participants will be randomized to undergo a 28-d, 55% energy deficit phase with (DEF+TEST: 200 mg testosterone enanthate per week) or without (DEF) exogenous testosterone. Diet and physical activity will be rigorously controlled. Recovery from the energy deficit (ad libitum diet, no testosterone) will be assessed until body mass has been recovered to within ±2.5% of initial body mass. Body composition, stable isotope methodologies, proteomics, muscle biopsies, whole-room calorimetry, molecular biology, activity/sleep monitoring, personality and cognitive function assessments, functional MRI, and comprehensive biochemistries will be used to assess physiological and psychological responses to energy restriction and recovery feeding while volunteers are in an expected hypogonadal versus eugonadal state. The Optimizing Performance for Soldiers (OPS) study aims to determine whether preventing hypogonadism will mitigate declines in physical and mental function that typically occur during prolonged energy deficit, and the efficacy of testosterone replacement on recovery from severe underfeeding. NCT02734238. Copyright © 2017. Published by Elsevier Inc.
Vallila-Rohter, Sofia; Kiran, Swathi
2015-08-01
Our purpose was to study strategy use during nonlinguistic category learning in aphasia. Twelve control participants without aphasia and 53 participants with aphasia (PWA) completed a computerized feedback-based category learning task consisting of training and testing phases. Accuracy rates of categorization in testing phases were calculated. To evaluate strategy use, strategy analyses were conducted over training and testing phases. Participant data were compared with model data that simulated complex multi-cue, single feature, and random pattern strategies. Learning success and strategy use were evaluated within the context of standardized cognitive-linguistic assessments. Categorization accuracy was higher among control participants than among PWA. The majority of control participants implemented suboptimal or optimal multi-cue and single-feature strategies by testing phases of the experiment. In contrast, a large subgroup of PWA implemented random patterns, or no strategy, during both training and testing phases of the experiment. Person-to-person variability arises not only in category learning ability but also in the strategies implemented to complete category learning tasks. PWA less frequently developed effective strategies during category learning tasks than control participants. Certain PWA may have impairments of strategy development or feedback processing not captured by language and currently probed cognitive abilities.
de Azambuja, Evandro; Bradbury, Ian; Saini, Kamal S.; Bines, José; Simon, Sergio D.; Dooren, Veerle Van; Aktan, Gursel; Pritchard, Kathleen I.; Wolff, Antonio C.; Smith, Ian; Jackisch, Christian; Lang, Istvan; Untch, Michael; Boyle, Frances; Xu, Binghe; Baselga, Jose; Perez, Edith A.; Piccart-Gebhart, Martine
2013-01-01
Purpose. This study measured the time taken for setting up the different facets of Adjuvant Lapatinib and/or Trastuzumab Treatment Optimization (ALTTO), an international phase III study being conducted in 44 participating countries. Methods. Time to regulatory authority (RA) approval, time to ethics committee/institutional review board (EC/IRB) approval, time from study approval by EC/IRB to first randomized patient, and time from first to last randomized patient were prospectively collected in the ALTTO study. Analyses were conducted by grouping countries into either geographic regions or economic classes as per the World Bank's criteria. Results. South America had a significantly longer time to RA approval (median: 236 days, range: 21–257 days) than Europe (median: 52 days, range: 0–151 days), North America (median: 26 days, range: 22–30 days), and Asia-Pacific (median: 62 days, range: 37–75 days). Upper-middle economies had longer times to RA approval (median: 123 days, range: 21–257 days) than high-income (median: 47 days, range: 0–112 days) and lower-middle income economies (median: 57 days, range: 37–62 days). No significant difference was observed for time to EC/IRB approval across the studied regions (median: 59 days, range 0–174 days). Overall, the median time from EC/IRB approval to first recruited patient was 169 days (range: 26–412 days). Conclusion. This study highlights the long time intervals required to activate a global phase III trial. Collaborative research groups, pharmaceutical industry sponsors, and regulatory authorities should analyze the current system and enter into dialogue for optimizing local policies. This would enable faster access of patients to innovative therapies and enhance the efficiency of clinical research. PMID:23359433
Meinzer, Caitlyn; Martin, Renee; Suarez, Jose I
2017-09-08
In phase II trials, the most efficacious dose is usually not known. Moreover, given limited resources, it is difficult to robustly identify a dose while also testing for a signal of efficacy that would support a phase III trial. Recent designs have sought to be more efficient by exploring multiple doses through the use of adaptive strategies. However, the added flexibility may potentially increase the risk of making incorrect assumptions and reduce the total amount of information available across the dose range as a function of imbalanced sample size. To balance these challenges, a novel placebo-controlled design is presented in which a restricted Bayesian response adaptive randomization (RAR) is used to allocate a majority of subjects to the optimal dose of active drug, defined as the dose with the lowest probability of poor outcome. However, the allocation between subjects who receive active drug or placebo is held constant to retain the maximum possible power for a hypothesis test of overall efficacy comparing the optimal dose to placebo. The design properties and optimization of the design are presented in the context of a phase II trial for subarachnoid hemorrhage. For a fixed total sample size, a trade-off exists between the ability to select the optimal dose and the probability of rejecting the null hypothesis. This relationship is modified by the allocation ratio between active and control subjects, the choice of RAR algorithm, and the number of subjects allocated to an initial fixed allocation period. While a responsive RAR algorithm improves the ability to select the correct dose, there is an increased risk of assigning more subjects to a worse arm as a function of ephemeral trends in the data. A subarachnoid treatment trial is used to illustrate how this design can be customized for specific objectives and available data. 
Bayesian adaptive designs are a flexible approach to addressing multiple questions surrounding the optimal dose for treatment efficacy within the context of limited resources. While the design is general enough to apply to many situations, future work is needed to address interim analyses and the incorporation of models for dose response.
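A restricted response-adaptive randomization of the kind described can be sketched as follows: posterior draws of each active dose's poor-outcome rate drive the split among doses, while the active-versus-placebo ratio is held fixed to preserve power for the overall efficacy test. This is an illustrative approximation under a beta-binomial model, not the authors' exact algorithm, and all parameter names are assumptions:

```python
import random

def rar_allocation(poor, n, placebo_frac=0.5, draws=4000, seed=0):
    """Restricted Bayesian RAR sketch (illustrative only).

    poor[d], n[d]: poor outcomes and total subjects so far on active dose d.
    Returns (active-dose allocation probabilities, fixed placebo fraction).
    """
    rng = random.Random(seed)
    k = len(n)
    wins = [0] * k
    for _ in range(draws):
        # Beta(1 + poor, 1 + non-poor) posterior draw of each poor-outcome rate.
        theta = [rng.betavariate(1 + poor[d], 1 + n[d] - poor[d]) for d in range(k)]
        # Credit the dose that looks best (lowest poor-outcome rate) this draw.
        wins[theta.index(min(theta))] += 1
    active = 1.0 - placebo_frac
    return [active * w / draws for w in wins], placebo_frac
```

Because the placebo fraction never adapts, the comparison of the eventually selected dose against placebo retains its planned sample size, which is the "restricted" part of the design.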
Kumar, Manjeet; Rawat, Tarun Kumar; Aggarwal, Apoorva
2017-03-01
In this paper, a new meta-heuristic optimization technique, called the interior search algorithm (ISA) with Lévy flight, is proposed and applied to determine the optimal parameters of an unknown infinite impulse response (IIR) system for the system identification problem. ISA is based on aesthetics, which is commonly used in interior design and decoration processes. In ISA, a composition phase and a mirror phase are applied for addressing nonlinear and multimodal system identification problems. System identification using the modified-ISA (M-ISA) based method involves faster convergence and single-parameter tuning, and does not require derivative information because it uses a stochastic random search based on the concepts of Lévy flight. A proper tuning of the control parameter has been performed in order to achieve a balance between the intensification and diversification phases. In order to evaluate the performance of the proposed method, mean square error (MSE), computation time, and percentage improvement are considered as the performance measures. To validate the performance of the M-ISA based method, simulations have been carried out for three benchmarked IIR systems using same-order and reduced-order models. Genetic algorithm (GA), particle swarm optimization (PSO), cat swarm optimization (CSO), cuckoo search algorithm (CSA), differential evolution using wavelet mutation (DEWM), firefly algorithm (FFA), craziness based particle swarm optimization (CRPSO), harmony search (HS) algorithm, opposition based harmony search (OHS) algorithm, hybrid particle swarm optimization-gravitational search algorithm (HPSO-GSA) and ISA are also used to model the same examples, and the simulation results are compared. The obtained results confirm the efficiency of the proposed method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
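Lévy-flight steps of the kind used in such metaheuristics are commonly generated with Mantegna's algorithm; a sketch in plain Python (the tail index and seed are illustrative, not the paper's tuned settings):

```python
import math
import random

def levy_steps(n, beta=1.5, seed=0):
    # Mantegna's algorithm for Lévy-flight step lengths with tail index beta:
    # step = u / |v|^(1/beta), u ~ N(0, sigma^2), v ~ N(0, 1),
    # with sigma chosen so the steps follow a heavy-tailed Lévy distribution.
    rng = random.Random(seed)
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    steps = []
    for _ in range(n):
        u = rng.gauss(0.0, sigma)
        v = rng.gauss(0.0, 1.0)
        steps.append(u / abs(v) ** (1 / beta))
    return steps
```

The occasional very large step is the point: most moves stay local (intensification) while rare long jumps escape local minima (diversification).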
Efficient 3D porous microstructure reconstruction via Gaussian random field and hybrid optimization.
Jiang, Z; Chen, W; Burkhart, C
2013-11-01
Obtaining an accurate three-dimensional (3D) structure of a porous microstructure is important for assessing material properties based on finite element analysis. Whereas directly obtaining 3D images of the microstructure is impractical under many circumstances, two sets of methods have been developed in the literature to generate (reconstruct) a 3D microstructure from its 2D images: one characterizes the microstructure based on certain statistical descriptors, typically the two-point correlation function and the cluster correlation function, and then performs an optimization process to build a 3D structure that matches those statistical descriptors; the other models the microstructure using stochastic models such as a Gaussian random field and generates a 3D structure directly from the function. The former obtains a relatively accurate 3D microstructure, but computationally the optimization process can be very intensive, especially for problems with large image size; the latter generates a 3D microstructure quickly but sacrifices accuracy due to issues in numerical implementations. A hybrid optimization approach to modelling the 3D porous microstructure of random isotropic two-phase materials is proposed in this paper, which combines the two sets of methods and hence maintains the accuracy of the correlation-based method with improved efficiency. The proposed technique is verified for 3D reconstructions based on silica polymer composite images with different volume fractions. A comparison of the reconstructed microstructures and the optimization histories for both the original correlation-based method and our hybrid approach demonstrates the improved efficiency of the approach. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.
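Level-cut Gaussian random fields of the kind used in the second family of methods can be sketched in one dimension: smooth white noise to induce spatial correlation, then threshold at the quantile matching the target phase volume fraction. This is a simplified stand-in for the paper's 3-D construction, with illustrative parameters throughout:

```python
import random
import statistics

def gaussian_field_microstructure(n, window, vol_frac, seed=0):
    # 1-D level-cut Gaussian random field sketch: a moving average of white
    # noise gives a correlated field; thresholding it at the vol_frac quantile
    # yields a two-phase structure with the requested pore volume fraction.
    rng = random.Random(seed)
    noise = [rng.gauss(0.0, 1.0) for _ in range(n + window)]
    field = [statistics.fmean(noise[i:i + window]) for i in range(n)]
    cut = sorted(field)[int(round(vol_frac * n))]  # quantile threshold
    return [1 if v < cut else 0 for v in field]    # 1 = pore phase
```

The smoothing window plays the role of the field's correlation length: neighboring cells tend to fall in the same phase, unlike independent thresholded noise.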
Optimal two-phase sampling design for comparing accuracies of two binary classification rules.
Xu, Huiping; Hui, Siu L; Grannis, Shaun
2014-02-10
In this paper, we consider the design for comparing the performance of two binary classification rules, for example, two record linkage algorithms or two screening tests. Statistical methods are well developed for comparing these accuracy measures when the gold standard is available for every unit in the sample, or in a two-phase study when the gold standard is ascertained only in the second phase in a subsample using a fixed sampling scheme. However, these methods do not attempt to optimize the sampling scheme to minimize the variance of the estimators of interest. In comparing the performance of two classification rules, the parameters of primary interest are the difference in sensitivities, specificities, and positive predictive values. We derived the analytic variance formulas for these parameter estimates and used them to obtain the optimal sampling design. The efficiency of the optimal sampling design is evaluated through an empirical investigation that compares the optimal sampling with simple random sampling and with proportional allocation. Results of the empirical study show that the optimal sampling design is similar for estimating the difference in sensitivities and in specificities, and both achieve a substantial amount of variance reduction with an over-sample of subjects with discordant results and under-sample of subjects with concordant results. A heuristic rule is recommended when there is no prior knowledge of individual sensitivities and specificities, or the prevalence of the true positive findings in the study population. The optimal sampling is applied to a real-world example in record linkage to evaluate the difference in classification accuracy of two matching algorithms. Copyright © 2013 John Wiley & Sons, Ltd.
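The over-sampling of discordant strata behaves like a classical Neyman allocation, in which phase-two sample sizes scale with stratum size times within-stratum standard deviation. The sketch below illustrates that principle only; it is not the authors' derived variance formulas, and the stratum sizes and standard deviations are hypothetical:

```python
def neyman_allocation(stratum_sizes, stratum_sds, n_total):
    # Neyman allocation: phase-two sample size in stratum h proportional to
    # N_h * S_h, capped at the stratum size. Strata where the classifiers
    # disagree carry more variance, so they are sampled more heavily.
    weights = [N * s for N, s in zip(stratum_sizes, stratum_sds)]
    total = sum(weights)
    return [min(N, round(n_total * w / total))
            for N, w in zip(stratum_sizes, weights)]
```

With strata ordered as (concordant-positive, concordant-negative, discordant), a higher standard deviation in the discordant stratum produces a higher sampling fraction there, mirroring the paper's finding.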
George, Duncan; Gálvez, Verònica; Martin, Donel; Kumar, Divya; Leyden, John; Hadzi-Pavlovic, Dusan; Harper, Simon; Brodaty, Henry; Glue, Paul; Taylor, Rohan; Mitchell, Philip B; Loo, Colleen K
2017-11-01
To assess the efficacy and safety of subcutaneous ketamine for geriatric treatment-resistant depression. Secondary aims were to examine whether repeated treatments were safe and more effective in inducing or prolonging remission than a single treatment. This double-blind, controlled, multiple-crossover study with a 6-month follow-up (randomized controlled trial [RCT] phase) enrolled 16 participants (≥60 years) with treatment-resistant depression; those who relapsed after remission or did not remit in the RCT were administered an open-label phase. Up to five subcutaneous doses of ketamine (0.1, 0.2, 0.3, 0.4, and 0.5 mg/kg) were administered in separate sessions (≥1 week apart), with one active control (midazolam) randomly inserted (RCT phase). Twelve ketamine treatments were given in the open-label phase. Mood, hemodynamic, and psychotomimetic outcomes were assessed by blinded raters. Remitters in each phase were followed for 6 months. Seven of 14 RCT-phase completers remitted with ketamine treatment. Five remitted at doses below 0.5 mg/kg. Doses ≥ 0.2 mg/kg were significantly more effective than midazolam. Ketamine was well tolerated. Repeated treatments resulted in a higher likelihood of remission or longer time to relapse. The results provide preliminary evidence for the efficacy and safety of ketamine in treating depression in the elderly. Dose titration is recommended for optimizing antidepressant and safety outcomes on an individual basis. Subcutaneous injection is a practical method for giving ketamine. Repeated treatments may improve remission rates (clinicaltrials.gov; NCT01441505). Copyright © 2017 American Association for Geriatric Psychiatry. All rights reserved.
Mösges, Ralph; Rohdenburg, Christina; Eichel, Andrea; Zadoyan, Gregor; Kasche, Elena-Manja; Shah-Hosseini, Kija; Lehmacher, Walter; Schmalz, Petra; Compalati, Enrico
2017-11-01
To determine the optimal effective and safe dose of sublingual immunotherapy tablets containing carbamylated monomeric allergoids in patients with grass pollen-induced allergic rhinoconjunctivitis. In this prospective, randomized, double-blind, active-controlled, multicenter, Phase II study, four different daily doses were applied preseasonally for 12 weeks. Of 158 randomized adults, 155 subjects (safety population) received 300 units of allergy (UA)/day (n = 36), 600 UA/day (n = 43), 1000 UA/day (n = 39), or 2000 UA/day (n = 37). After treatment, 54.3, 47.6, 59.0 and 51.4% of patients, respectively, ceased to react to the highest allergen concentration in a conjunctival provocation test. Furthermore, the response threshold improved in 70.4, 62.9, 76.7 and 66.7% of patients, respectively. No serious adverse events occurred. This study found 1000 UA/day to be the optimal effective and safe dose.
Molenaar, Heike; Boehm, Robert; Piepho, Hans-Peter
2017-01-01
Robust phenotypic data allow adequate statistical analysis and are crucial for any breeding purpose. Such data are obtained from experiments laid out to best control local variation. Additionally, experiments frequently involve two phases, each contributing environmental sources of variation. For example, in a former experiment we conducted to evaluate production-related traits in Pelargonium zonale, there were two consecutive phases, each performed in a different greenhouse. Phase one involved the propagation of the breeding strains to obtain the stem cutting count, and phase two involved the assessment of root formation. The evaluation of the former study raised questions regarding options for improving the experimental layout: (i) Is there a disadvantage to using exactly the same design in both phases? (ii) Instead of generating a separate layout for each phase, can the design be optimized across both phases, such that the mean variance of a pair-wise treatment difference (MVD) can be decreased? To answer these questions, alternative approaches were explored to generate two-phase designs either in phase-wise order (Option 1) or across phases (Option 2). In Option 1 we considered the scenarios (i) using the same experimental design in both phases and (ii) randomizing each phase separately. In Option 2, we considered the scenarios (iii) generating a single design with eight replicates and splitting these among the two phases, (iv) separating the block structure across phases by dummy coding, and (v) design generation with optimal alignment of block units in the two phases. In both options, we considered the same or different block structures in each phase. The designs were evaluated by the MVD obtained by the intra-block analysis and the joint inter-block-intra-block analysis.
The smallest MVD was most frequently obtained for designs generated across phases rather than for each phase separately, in particular when both phases of the design were separated with a single pseudo-level. The joint optimization ensured that treatment concurrences were equally balanced across pairs, one of the prerequisites for an efficient design. The proposed alternative approaches can be implemented with any model-based design packages with facilities to formulate linear models for treatment and block structures.
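The MVD criterion used to compare these layouts can be computed directly from a design's treatment information matrix. The following is a minimal numpy sketch (not the authors' design-generation code): it evaluates the intra-block MVD of a given single-phase layout, here for a simple randomized complete block design, in units of the residual variance.

```python
import numpy as np

def mvd_intra_block(treatments, blocks):
    """Mean variance of a pairwise treatment difference (intra-block analysis).

    treatments, blocks: per-plot factor labels. Returns MVD in units of
    the residual variance sigma^2 (taken as 1 here).
    """
    t_levels = sorted(set(treatments)); b_levels = sorted(set(blocks))
    X = np.array([[tr == t for t in t_levels] for tr in treatments], float)
    Z = np.array([[bl == b for b in b_levels] for bl in blocks], float)
    # Intra-block information matrix for treatments (blocks projected out)
    C = X.T @ X - X.T @ Z @ np.linalg.inv(Z.T @ Z) @ Z.T @ X
    Cinv = np.linalg.pinv(C)                      # generalized inverse
    v = np.diag(Cinv)
    n = len(t_levels)
    pair_vars = [v[i] + v[j] - 2 * Cinv[i, j]
                 for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(pair_vars))

# A randomized complete block design: every treatment once in every block
trts = [t for b in range(4) for t in "ABC"]
blks = [b for b in range(4) for _ in "ABC"]
print(mvd_intra_block(trts, blks))  # RCBD: 2*sigma^2/r = 0.5 with r = 4 blocks
```

For the complete block design the MVD reproduces the textbook value 2σ²/r; for incomplete or two-phase layouts the same function ranks candidate designs, which is the role the criterion plays in the study.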
From the physics of interacting polymers to optimizing routes on the London Underground
Yeung, Chi Ho; Saad, David; Wong, K. Y. Michael
2013-01-01
Optimizing paths on networks is crucial for many applications, ranging from subway traffic to Internet communication. Because global path optimization that takes account of all path choices simultaneously is computationally hard, most existing routing algorithms optimize paths individually, thus providing suboptimal solutions. We use the physics of interacting polymers and disordered systems to analyze macroscopic properties of generic path optimization problems and derive a simple, principled, generic, and distributed routing algorithm capable of considering all individual path choices simultaneously. We demonstrate the efficacy of the algorithm by applying it to: (i) random graphs resembling Internet overlay networks, (ii) travel on the London Underground network based on Oyster card data, and (iii) the global airport network. Analytically derived macroscopic properties give rise to insightful new routing phenomena, including phase transitions and scaling laws, that facilitate better understanding of the appropriate operational regimes and their limitations, which are difficult to obtain otherwise. PMID:23898198
From the physics of interacting polymers to optimizing routes on the London Underground.
Yeung, Chi Ho; Saad, David; Wong, K Y Michael
2013-08-20
Optimizing paths on networks is crucial for many applications, ranging from subway traffic to Internet communication. Because global path optimization that takes account of all path choices simultaneously is computationally hard, most existing routing algorithms optimize paths individually, thus providing suboptimal solutions. We use the physics of interacting polymers and disordered systems to analyze macroscopic properties of generic path optimization problems and derive a simple, principled, generic, and distributed routing algorithm capable of considering all individual path choices simultaneously. We demonstrate the efficacy of the algorithm by applying it to: (i) random graphs resembling Internet overlay networks, (ii) travel on the London Underground network based on Oyster card data, and (iii) the global airport network. Analytically derived macroscopic properties give rise to insightful new routing phenomena, including phase transitions and scaling laws, that facilitate better understanding of the appropriate operational regimes and their limitations, which are difficult to obtain otherwise.
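The gap between individual and global path optimization described above can be made concrete with a toy example. This is not the authors' polymer-inspired message-passing algorithm; it is a simple greedy re-routing sketch under a convex per-edge congestion cost (load squared), showing that considering paths jointly beats independent shortest paths when routes share a bottleneck.

```python
import heapq
from collections import defaultdict

# Undirected toy graph: edge (2,3) is a shared bottleneck; 2-6-3 is a detour
edges = [(0, 2), (1, 2), (2, 3), (2, 6), (6, 3), (3, 4), (3, 5)]
adj = defaultdict(list)
for u, v in edges:
    adj[u].append(v); adj[v].append(u)

def dijkstra(src, dst, load):
    """Shortest path under the marginal cost of adding one unit of traffic
    to each edge, for the convex per-edge cost load**2."""
    def w(u, v):
        l = load[frozenset((u, v))]
        return (l + 1) ** 2 - l ** 2
    dist = {src: 0}; prev = {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v in adj[u]:
            nd = d + w(u, v)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd; prev[v] = u
                heapq.heappush(pq, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

def total_cost(paths):
    load = defaultdict(int)
    for p in paths:
        for u, v in zip(p, p[1:]):
            load[frozenset((u, v))] += 1
    return sum(l ** 2 for l in load.values())

pairs = [(0, 4), (1, 5)]
# Individual optimization: each pair routed ignoring all others
solo = [dijkstra(s, t, defaultdict(int)) for s, t in pairs]
# Joint optimization: repeatedly re-route each path against the others' load
paths = list(solo)
for _ in range(5):
    for i, (s, t) in enumerate(pairs):
        load = defaultdict(int)
        for j, p in enumerate(paths):
            if j != i:
                for u, v in zip(p, p[1:]):
                    load[frozenset((u, v))] += 1
        paths[i] = dijkstra(s, t, load)
print(total_cost(solo), total_cost(paths))  # 8 (individual) vs 7 (joint)
```

Here both independent shortest paths pile onto edge (2,3), paying the quadratic congestion penalty; the joint pass diverts one path through the detour and lowers the total cost.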
Topology-optimized metasurfaces: impact of initial geometric layout.
Yang, Jianji; Fan, Jonathan A
2017-08-15
Topology optimization is a powerful iterative inverse design technique in metasurface engineering and can transform an initial layout into a high-performance device. With this method, devices are optimized within a local design phase space, making the identification of suitable initial geometries essential. In this Letter, we examine the impact of initial geometric layout on the performance of large-angle (75 deg) topology-optimized metagrating deflectors. We find that when conventional metasurface designs based on dielectric nanoposts are used as initial layouts for topology optimization, the final devices have efficiencies around 65%. In contrast, when random initial layouts are used, the final devices have ultra-high efficiencies that can reach 94%. Our numerical experiments suggest that device topologies based on conventional metasurface designs may not be suitable to produce ultra-high-efficiency, large-angle metasurfaces. Rather, initial geometric layouts with non-trivial topologies and shapes are required.
A comparative study of clock rate and drift estimation
NASA Technical Reports Server (NTRS)
Breakiron, Lee A.
1994-01-01
Five different methods of drift determination and four different methods of rate determination were compared using months of hourly phase and frequency data from a sample of cesium clocks and active hydrogen masers. Linear least squares on frequency is selected as the optimal method of determining both drift and rate, more on the basis of parameter parsimony and confidence measures than on random and systematic errors.
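"Linear least squares on frequency" amounts to fitting a straight line to the frequency series: the intercept estimates the frequency offset (rate) and the slope estimates the drift. A minimal sketch with synthetic data (the sample sizes and units here are illustrative, not the study's):

```python
def fit_rate_and_drift(t, y):
    """Ordinary least squares line fit to frequency data y(t):
    returns (rate at t = 0, drift per unit time)."""
    n = len(t)
    tm = sum(t) / n; ym = sum(y) / n
    slope = sum((ti - tm) * (yi - ym) for ti, yi in zip(t, y)) \
            / sum((ti - tm) ** 2 for ti in t)
    return ym - slope * tm, slope

# Hourly fractional-frequency readings with a constant drift (synthetic)
hours = list(range(24))
freq = [1e-13 + 2e-15 * h for h in hours]  # rate 1e-13, drift 2e-15 per hour
rate, drift = fit_rate_and_drift(hours, freq)
print(rate, drift)
```

With noise-free synthetic data the fit recovers the injected rate and drift exactly; with real clock data the same estimator returns their least-squares estimates.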
Cooperation in the noisy case: Prisoner's dilemma game on two types of regular random graphs
NASA Astrophysics Data System (ADS)
Vukov, Jeromos; Szabó, György; Szolnoki, Attila
2006-06-01
We have studied an evolutionary prisoner’s dilemma game with players located on two types of random regular graphs with a degree of 4. The analysis is focused on the effects of payoffs and noise (temperature) on the maintenance of cooperation. When varying the noise level and/or the highest payoff, the system exhibits a second-order phase transition from a mixed state of cooperators and defectors to an absorbing state where only defectors remain alive. For the random regular graph (and Bethe lattice) the behavior of the system is similar to that found previously on the square lattice with nearest neighbor interactions, although the measure of cooperation is enhanced by the absence of loops in the connectivity structure. For low noise the optimal connectivity structure is built up from randomly connected triangles.
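Noise (temperature) typically enters such models through the strategy-adoption probability, commonly the Fermi rule W = 1/(1 + exp((P_x - P_y)/K)). The sketch below uses that rule on a degree-4 ring lattice as a stand-in for the paper's random regular graphs, with a weak prisoner's dilemma parametrization (R = 1, P = S = 0, T = b); both the lattice and the payoff matrix are illustrative assumptions, not necessarily the paper's exact setup.

```python
import math, random

def fermi_prob(p_x, p_y, K):
    """Probability that player x adopts neighbor y's strategy (Fermi rule):
    noisy imitation with temperature K."""
    return 1.0 / (1.0 + math.exp((p_x - p_y) / K))

def play(strategies, neighbors, b, K, rng, sweeps=200):
    """Weak PD parametrization: R = 1, P = S = 0, T = b (b > 1 is the temptation).
    strategies[i] is True for a cooperator."""
    n = len(strategies)
    def payoff(i):
        s = strategies[i]
        total = 0.0
        for j in neighbors[i]:
            if s and strategies[j]:
                total += 1.0          # mutual cooperation
            elif not s and strategies[j]:
                total += b            # defect against a cooperator
        return total
    for _ in range(sweeps * n):
        i = rng.randrange(n); j = rng.choice(neighbors[i])
        if strategies[i] != strategies[j] and \
           rng.random() < fermi_prob(payoff(i), payoff(j), K):
            strategies[i] = strategies[j]
    return sum(strategies) / n

# Degree-4 ring lattice as a stand-in for a random regular graph of degree 4
n = 100
neighbors = [[(i - 2) % n, (i - 1) % n, (i + 1) % n, (i + 2) % n] for i in range(n)]
rng = random.Random(0)
rho = play([rng.random() < 0.5 for _ in range(n)], neighbors, b=1.05, K=0.1, rng=rng)
print(rho)  # surviving fraction of cooperators after relaxation
```

Sweeping b and K with such a simulation and recording where rho first hits zero traces out the kind of cooperator-extinction phase boundary the abstract describes.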
Pain-Relieving Interventions for Retinopathy of Prematurity: A Meta-analysis.
Disher, Timothy; Cameron, Chris; Mitra, Souvik; Cathcart, Kelcey; Campbell-Yeo, Marsha
2018-06-01
Retinopathy of prematurity eye examinations are conducted in the neonatal intensive care unit. To combine randomized trials of pain-relieving interventions for retinopathy of prematurity examinations using network meta-analysis. Systematic review and network meta-analysis of Medline, Embase, Cochrane Central Register of Controlled Trials, Web of Science, and the World Health Organization International Clinical Trials Registry Platform. All databases were searched from inception to February 2017. Abstract and title screen and full-text screening were conducted independently by 2 reviewers. Data were extracted by 2 reviewers and pooled with random effect models if the number of trials within a comparison was sufficient. The primary outcome was pain during the examination period; secondary outcomes were pain after the examination, physiologic response, and adverse events. Twenty-nine studies (N = 1487) were included. Topical anesthetic (TA) combined with sweet taste and an adjunct intervention (eg, nonnutritive sucking) had the highest probability of being the optimal treatment (mean difference [95% credible interval] versus TA alone = -3.67 [-5.86 to -1.47]; surface under the cumulative ranking curve = 0.86). Secondary outcomes were sparsely reported (2-4 studies, N = 90-248) but supported sweet-tasting solutions with or without adjunct interventions as optimal. Limitations included moderate heterogeneity in the pain assessment reactivity phase and severe heterogeneity in the regulation phase. Multisensory interventions including sweet taste are likely the optimal treatment for reducing pain resulting from eye examinations in preterm infants. No interventions were effective in absolute terms. Copyright © 2018 by the American Academy of Pediatrics.
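Random-effects pooling of trial results can be illustrated with the standard DerSimonian-Laird estimator. The study above ran a Bayesian network meta-analysis; the pairwise sketch below, with made-up effect sizes and variances, only shows how the between-study variance τ² is estimated and folded into the pooled weights.

```python
def dersimonian_laird(effects, variances):
    """Pairwise random-effects meta-analysis (DerSimonian-Laird).
    Returns (pooled effect, its standard error, tau^2)."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)              # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = (1.0 / sum(w_re)) ** 0.5
    return pooled, se, tau2

# Hypothetical pain-score mean differences (not the paper's data)
pooled, se, tau2 = dersimonian_laird([-3.1, -4.2, -2.5], [0.2, 0.3, 0.25])
print(round(pooled, 2), round(se, 2), round(tau2, 2))
```

When the study estimates disagree more than their within-study variances allow, τ² comes out positive and the pooled standard error widens accordingly, which is the heterogeneity behavior the abstract's limitations refer to.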
Optimal resource diffusion for suppressing disease spreading in multiplex networks
NASA Astrophysics Data System (ADS)
Chen, Xiaolong; Wang, Wei; Cai, Shimin; Stanley, H. Eugene; Braunstein, Lidia A.
2018-05-01
Resource diffusion is a ubiquitous phenomenon, but how it impacts epidemic spreading has received little study. We propose a model that couples epidemic spreading and resource diffusion in multiplex networks. The spread of disease in a physical contact layer and the recovery of the infected nodes are both strongly dependent upon resources supplied by their counterparts in the social layer. The generation and diffusion of resources in the social layer are in turn strongly dependent upon the state of the nodes in the physical contact layer. Resources diffuse preferentially or randomly in this model. To quantify the degree of preferential diffusion, a bias parameter that controls the resource diffusion is proposed. We conduct extensive simulations and find that preferential resource diffusion can change the type of phase transition of the fraction of infected nodes. When the degree of interlayer correlation is below a critical value, increasing the bias parameter changes the phase transition from double continuous to single continuous. When the degree of interlayer correlation is above a critical value, the phase transition changes from multiple continuous to first discontinuous and then to hybrid. We find hysteresis loops in the phase transition. We also find that there is an optimal resource strategy at each fixed degree of interlayer correlation under which the threshold reaches a maximum and the disease can be maximally suppressed. In addition, the optimal controlling parameter increases as the degree of interlayer correlation increases.
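One simple way to realize a bias parameter of this kind is to let each node pass a unit of resource to a neighbor chosen with probability proportional to the neighbor's degree raised to a bias exponent alpha, so alpha = 0 recovers random diffusion and larger alpha favors hubs. The parametrization, the tiny network, and all values below are illustrative assumptions, not the paper's model.

```python
import random

def diffuse(resources, neighbors, degrees, alpha, rng):
    """One round of biased resource diffusion: each node with resources
    passes one unit to a neighbor chosen with probability proportional
    to degree**alpha (alpha = 0 is uniformly random)."""
    new = dict(resources)
    for i, r in resources.items():
        if r <= 0 or not neighbors[i]:
            continue
        weights = [degrees[j] ** alpha for j in neighbors[i]]
        j = rng.choices(neighbors[i], weights=weights)[0]
        new[i] -= 1
        new[j] += 1
    return new

# Tiny hub-and-spokes social layer (hypothetical; not the paper's networks)
neighbors = {0: [1, 2, 3, 4], 1: [0, 2], 2: [0, 1], 3: [0, 4], 4: [0, 3]}
degrees = {i: len(ns) for i, ns in neighbors.items()}
rng = random.Random(1)
res = {i: 10 for i in neighbors}
for _ in range(50):
    res = diffuse(res, neighbors, degrees, alpha=2.0, rng=rng)
print(res)  # strong bias concentrates resources on the hub (node 0)
```

Coupling such a diffusion step to an infection/recovery step on the contact layer, and sweeping alpha, is the kind of experiment in which the abstract's transition-type changes appear.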
A random walk approach to quantum algorithms.
Kendon, Vivien M
2006-12-15
The development of quantum algorithms based on quantum versions of random walks is placed in the context of the emerging field of quantum computing. Constructing a suitable quantum version of a random walk is not trivial; pure quantum dynamics is deterministic, so randomness only enters during the measurement phase, i.e. when converting the quantum information into classical information. The outcome of a quantum random walk is very different from the corresponding classical random walk owing to the interference between the different possible paths. The upshot is that quantum walkers find themselves further from their starting point than a classical walker on average, and this forms the basis of a quantum speed up, which can be exploited to solve problems faster. Surprisingly, the effect of making the walk slightly less than perfectly quantum can optimize the properties of the quantum walk for algorithmic applications. Looking to the future, even with a small quantum computer available, the development of quantum walk algorithms might proceed more rapidly than it has, especially for solving real problems.
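The ballistic spread of the quantum walk versus the diffusive spread of its classical counterpart is easy to see numerically. Below is a minimal discrete-time Hadamard walk on the line with a symmetric initial coin state, a standard construction rather than any specific algorithm from the review: after t steps the classical walk's standard deviation is sqrt(t), while the Hadamard walk's grows linearly in t.

```python
from math import sqrt

def hadamard_walk(steps):
    """Discrete-time quantum walk on the line with a Hadamard coin.
    amp[x] = (amplitude with coin 'left', amplitude with coin 'right')."""
    h = 1 / sqrt(2)
    amp = {0: (1 / sqrt(2), 1j / sqrt(2))}   # symmetric initial coin state
    for _ in range(steps):
        nxt = {}
        for x, (l, r) in amp.items():
            nl, nr = h * (l + r), h * (l - r)        # Hadamard coin toss
            a = nxt.get(x - 1, (0, 0)); nxt[x - 1] = (a[0] + nl, a[1])
            b = nxt.get(x + 1, (0, 0)); nxt[x + 1] = (b[0], b[1] + nr)
        amp = nxt
    return {x: abs(l) ** 2 + abs(r) ** 2 for x, (l, r) in amp.items()}

def std_dev(prob):
    m = sum(x * p for x, p in prob.items())
    return sqrt(sum((x - m) ** 2 * p for x, p in prob.items()))

p = hadamard_walk(100)
print(std_dev(p))   # grows ~0.54*t, versus sqrt(t) = 10 for the classical walk
```

Since the evolution is unitary, the probabilities still sum to one at every step; the quadratic speed-up in spreading is the basis of quantum-walk search speed-ups mentioned in the abstract.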
DuBois, Janet; Dover, Jeffrey S; Jones, Terry M; Weiss, Robert A; Berk, David R; Ahluwalia, Gurpreet
2018-03-01
Rosacea is a chronic dermatologic condition with limited treatment options. This phase 2 study evaluated the optimal oxymetazoline dosing regimen in patients with moderate to severe persistent facial erythema of rosacea. Patients were randomly assigned to oxymetazoline cream, 0.5%, 1.0%, or 1.5%, or vehicle, administered once daily (QD) or twice daily (BID) for 28 consecutive days. The primary efficacy endpoint was the proportion of patients with ≥2-grade improvement from baseline on the Clinician Erythema Assessment (CEA) and the Subject Self-Assessment of erythema (SSA-1) on day 28. Safety assessments included treatment-emergent adverse events and dermal tolerability. A total of 356 patients were treated (mean age, 50.0 years; 80.1% female). The proportions of patients achieving the primary endpoint were significantly higher with oxymetazoline 0.5% QD (P=0.049), 1.0% QD (P=0.006), 1.5% QD (P=0.012), 1.0% BID (P=0.021), and 1.5% BID (P=0.006) versus their respective vehicles. For both QD and BID dosing, the efficacy of oxymetazoline 1.0% was greater than the 0.5% dose and comparable to the 1.5% dose. Safety and application-site tolerability were similar across groups. Short-term treatment period. Oxymetazoline 1.0% QD provided the optimal dosing regimen and was selected for evaluation in phase 3 clinical studies. J Drugs Dermatol. 2018;17(3):308-316.
Model's sparse representation based on reduced mixed GMsFE basis methods
NASA Astrophysics Data System (ADS)
Jiang, Lijian; Li, Qiuqi
2017-06-01
In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application for the elliptic PDEs is the flow in heterogeneous random porous media. The mixed generalized multiscale finite element method (GMsFEM) is one of the accurate and efficient approaches to solve the flow problem in a coarse grid and obtain the velocity with local mass conservation. When the inputs of the PDEs are parameterized by the random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts the computational efficiency. In order to overcome the difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed based on the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than the dimension of the original full order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs.
In particular, a two-phase flow model in random porous media is simulated by the proposed sparse representation method.
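One of the two sampling strategies mentioned, proper orthogonal decomposition, can be sketched in a few lines: collect snapshot solutions over the parameter space and keep the leading left singular vectors as a parameter-independent reduced basis. The example below uses synthetic snapshots that genuinely span a two-dimensional subspace; it is a generic POD sketch, not the authors' GMsFE implementation.

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """Proper orthogonal decomposition: keep the fewest left singular
    vectors capturing the requested fraction of snapshot 'energy'."""
    u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(cum, energy)) + 1
    return u[:, :r]

# Parameter-dependent 'solutions' that really live in a 2-D subspace
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
params = rng.uniform(0.5, 2.0, size=30)
snaps = np.column_stack([np.sin(np.pi * x) + m * x ** 2 for m in params])
basis = pod_basis(snaps)
print(basis.shape)   # (50, 2): two modes reproduce every snapshot
```

Projecting any new parameter instance onto this fixed basis replaces the parameter-dependent basis construction, which is the online saving the abstract describes.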
Model's sparse representation based on reduced mixed GMsFE basis methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn; Li, Qiuqi, E-mail: qiuqili@hnu.edu.cn
2017-06-01
In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application for the elliptic PDEs is the flow in heterogeneous random porous media. The mixed generalized multiscale finite element method (GMsFEM) is one of the accurate and efficient approaches to solve the flow problem in a coarse grid and obtain the velocity with local mass conservation. When the inputs of the PDEs are parameterized by the random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts the computational efficiency. In order to overcome the difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed based on the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than the dimension of the original full order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs. 
In particular, a two-phase flow model in random porous media is simulated by the proposed sparse representation method.
Araújo, Ricardo de A
2010-12-01
This paper presents a hybrid intelligent methodology to design increasing translation invariant morphological operators applied to Brazilian stock market prediction (overcoming the random walk dilemma). The proposed Translation Invariant Morphological Robust Automatic phase-Adjustment (TIMRAA) method consists of a hybrid intelligent model composed of a Modular Morphological Neural Network (MMNN) with a Quantum-Inspired Evolutionary Algorithm (QIEA), which searches for the best time lags to reconstruct the phase space of the time series generator phenomenon and determines the initial (sub-optimal) parameters of the MMNN. Each individual of the QIEA population is further trained by the Back Propagation (BP) algorithm to improve the MMNN parameters supplied by the QIEA. Also, for each prediction model generated, it uses a behavioral statistical test and a phase fix procedure to adjust time phase distortions observed in stock market time series. Furthermore, an experimental analysis is conducted with the proposed method through four Brazilian stock market time series, and the achieved results are discussed and compared to results found with random walk models and the previously introduced Time-delay Added Evolutionary Forecasting (TAEF) and Morphological-Rank-Linear Time-lag Added Evolutionary Forecasting (MRLTAEF) methods. Copyright © 2010 Elsevier Ltd. All rights reserved.
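The "time lags to reconstruct the phase space" refer to delay embedding: each scalar observation is mapped to a vector of lagged past values. In the paper the lag set is chosen by the quantum-inspired evolutionary search; the sketch below simply applies a hand-picked lag set to a toy series to show what the embedding itself does.

```python
def delay_embed(series, lags):
    """Reconstruct a phase-space trajectory from a scalar series using a
    given set of time lags (here fixed by hand, not searched for)."""
    start = max(lags)
    return [[series[t - l] for l in lags] for t in range(start, len(series))]

# Toy 'price' series; lags chosen by hand, not by the evolutionary search
prices = [100 + 0.5 * t + (-1) ** t for t in range(20)]
points = delay_embed(prices, lags=[1, 2, 5])
print(len(points), len(points[0]))  # 15 embedded points of dimension 3
```

Each embedded point then serves as one input vector to the predictor (the MMNN in the paper), so the quality of the lag set directly shapes what the model can learn.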
An Examination of Strategy Implementation During Abstract Nonlinguistic Category Learning in Aphasia
Kiran, Swathi
2015-01-01
Purpose Our purpose was to study strategy use during nonlinguistic category learning in aphasia. Method Twelve control participants without aphasia and 53 participants with aphasia (PWA) completed a computerized feedback-based category learning task consisting of training and testing phases. Accuracy rates of categorization in testing phases were calculated. To evaluate strategy use, strategy analyses were conducted over training and testing phases. Participant data were compared with model data that simulated complex multi-cue, single feature, and random pattern strategies. Learning success and strategy use were evaluated within the context of standardized cognitive–linguistic assessments. Results Categorization accuracy was higher among control participants than among PWA. The majority of control participants implemented suboptimal or optimal multi-cue and single-feature strategies by testing phases of the experiment. In contrast, a large subgroup of PWA implemented random patterns, or no strategy, during both training and testing phases of the experiment. Conclusions Person-to-person variability arises not only in category learning ability but also in the strategies implemented to complete category learning tasks. PWA less frequently developed effective strategies during category learning tasks than control participants. Certain PWA may have impairments of strategy development or feedback processing not captured by language and currently probed cognitive abilities. PMID:25908438
Improving practice in community-based settings: a randomized trial of supervision - study protocol.
Dorsey, Shannon; Pullmann, Michael D; Deblinger, Esther; Berliner, Lucy; Kerns, Suzanne E; Thompson, Kelly; Unützer, Jürgen; Weisz, John R; Garland, Ann F
2013-08-10
Evidence-based treatments for child mental health problems are not consistently available in public mental health settings. Expanding availability requires workforce training. However, research has demonstrated that training alone is not sufficient for changing provider behavior, suggesting that ongoing intervention-specific supervision or consultation is required. Supervision is notably under-investigated, particularly as provided in public mental health. The degree to which supervision in this setting includes 'gold standard' supervision elements from efficacy trials (e.g., session review, model fidelity, outcome monitoring, skill-building) is unknown. The current federally-funded investigation leverages the Washington State Trauma-focused Cognitive Behavioral Therapy Initiative to describe usual supervision practices and test the impact of systematic implementation of gold standard supervision strategies on treatment fidelity and clinical outcomes. The study has two phases. We will conduct an initial descriptive study (Phase I) of supervision practices within public mental health in Washington State followed by a randomized controlled trial of gold standard supervision strategies (Phase II), with randomization at the clinician level (i.e., supervisors provide both conditions). Study participants will be 35 supervisors and 130 clinicians in community mental health centers. We will enroll one child per clinician in Phase I (N = 130) and three children per clinician in Phase II (N = 390). We use a multi-level mixed within- and between-subjects longitudinal design. Audio recordings of supervision and therapy sessions will be collected and coded throughout both phases. Child outcome data will be collected at the beginning of treatment and at three and six months into treatment. This study will provide insight into how supervisors can optimally support clinicians delivering evidence-based treatments. 
Phase I will provide descriptive information, currently unavailable in the literature, about commonly used supervision strategies in community mental health. The Phase II randomized controlled trial of gold standard supervision strategies is, to our knowledge, the first experimental study of gold standard supervision strategies in community mental health and will yield needed information about how to leverage supervision to improve clinician fidelity and client outcomes. ClinicalTrials.gov NCT01800266.
Improving practice in community-based settings: a randomized trial of supervision – study protocol
2013-01-01
Background Evidence-based treatments for child mental health problems are not consistently available in public mental health settings. Expanding availability requires workforce training. However, research has demonstrated that training alone is not sufficient for changing provider behavior, suggesting that ongoing intervention-specific supervision or consultation is required. Supervision is notably under-investigated, particularly as provided in public mental health. The degree to which supervision in this setting includes ‘gold standard’ supervision elements from efficacy trials (e.g., session review, model fidelity, outcome monitoring, skill-building) is unknown. The current federally-funded investigation leverages the Washington State Trauma-focused Cognitive Behavioral Therapy Initiative to describe usual supervision practices and test the impact of systematic implementation of gold standard supervision strategies on treatment fidelity and clinical outcomes. Methods/Design The study has two phases. We will conduct an initial descriptive study (Phase I) of supervision practices within public mental health in Washington State followed by a randomized controlled trial of gold standard supervision strategies (Phase II), with randomization at the clinician level (i.e., supervisors provide both conditions). Study participants will be 35 supervisors and 130 clinicians in community mental health centers. We will enroll one child per clinician in Phase I (N = 130) and three children per clinician in Phase II (N = 390). We use a multi-level mixed within- and between-subjects longitudinal design. Audio recordings of supervision and therapy sessions will be collected and coded throughout both phases. Child outcome data will be collected at the beginning of treatment and at three and six months into treatment. Discussion This study will provide insight into how supervisors can optimally support clinicians delivering evidence-based treatments. 
Phase I will provide descriptive information, currently unavailable in the literature, about commonly used supervision strategies in community mental health. The Phase II randomized controlled trial of gold standard supervision strategies is, to our knowledge, the first experimental study of gold standard supervision strategies in community mental health and will yield needed information about how to leverage supervision to improve clinician fidelity and client outcomes. Trial registration ClinicalTrials.gov NCT01800266 PMID:23937766
Photonic quantum simulator for unbiased phase covariant cloning
NASA Astrophysics Data System (ADS)
Knoll, Laura T.; López Grande, Ignacio H.; Larotonda, Miguel A.
2018-01-01
We present the results of a linear optics photonic implementation of a quantum circuit that simulates a phase covariant cloner, using two different degrees of freedom of a single photon. We experimentally simulate the action of two mirrored 1 → 2 cloners, each of them biasing the cloned states into opposite regions of the Bloch sphere. We show that by applying a random sequence of these two cloners, an eavesdropper can mitigate the amount of noise added to the original input state and therefore prepare clones with no bias, but with the same individual fidelity, masking its presence in a quantum key distribution protocol. Input polarization qubit states are cloned into path qubit states of the same photon, which is identified as a potential eavesdropper in a quantum key distribution protocol. The device has the flexibility to produce mirrored versions that optimally clone states on either the northern or southern hemispheres of the Bloch sphere, as well as to simulate optimal and non-optimal cloning machines by tuning the asymmetry on each of the cloning machines.
NASA Astrophysics Data System (ADS)
Diallo, M. S.; Holschneider, M.; Kulesh, M.; Scherbaum, F.; Ohrnberger, M.; Lück, E.
2004-05-01
This contribution is concerned with the estimation of attenuation and dispersion characteristics of surface waves observed on a shallow seismic record. The analysis is based on an initial parameterization of the phase and attenuation functions, which are then estimated by minimizing a properly defined merit function. To minimize the effect of random noise on the estimates of dispersion and attenuation, we use cross-correlations (in the Fourier domain) of preselected traces from some region of interest along the survey line. These cross-correlations are then expressed in terms of the parameterized attenuation and phase functions and the auto-correlation of the so-called source trace or reference trace. Cross-correlations that enter the optimization are selected so as to provide an average estimate of both the attenuation function and the phase (group) velocity of the area under investigation. The advantage of the method over the standard two-station method using Fourier techniques is that uncertainties related to phase unwrapping and to the estimate of the number of 2π cycle skips in the phase are eliminated. However, when multiple mode arrivals are observed, it becomes nearly impossible to obtain reliable estimates of the dispersion curves for the different modes using the optimization method alone. To circumvent this limitation, we use the presented approach in conjunction with the wavelet propagation operator (Kulesh et al., 2003), which allows the application of band-pass filtering in the (ω, t) domain to select a particular mode for the minimization. Also, by expressing the cost function in the wavelet domain, the optimization can be performed either with respect to the phase, the modulus of the transform, or a combination of both. This flexibility in the design of the cost function provides an additional means of constraining the optimization results. 
Results from the application of this dispersion and attenuation analysis method are shown for both synthetic and real 2D shallow seismic data sets. M. Kulesh, M. Holschneider, M. S. Diallo, Q. Xie and F. Scherbaum, Modeling of Wave Dispersion Using Wavelet Transform (Submitted to Pure and Applied Geophysics).
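The core idea of the cross-correlation approach, reading a propagation delay off the phase of the cross-spectrum rather than unwrapping each trace's spectrum separately, can be sketched as follows. This is a simplified single-delay version (the paper parameterizes full phase and attenuation functions); the naive DFT, the Gaussian-windowed test signal, and the circular shift are illustrative choices.

```python
import cmath, math

def phase_delay(trace_a, trace_b, dt):
    """Estimate the delay of trace_b relative to trace_a from the slope of
    the cross-spectrum phase versus angular frequency (least squares)."""
    n = len(trace_a)
    def dft(x, k):                      # naive DFT; numpy.fft would be usual
        return sum(xi * cmath.exp(-2j * math.pi * k * i / n)
                   for i, xi in enumerate(x))
    num = den = 0.0
    prev = 0.0
    for k in range(1, 12):              # low-frequency band, skip DC
        c = dft(trace_a, k) * dft(trace_b, k).conjugate()
        ph = cmath.phase(c)
        while ph - prev > math.pi:      # simple phase unwrapping
            ph -= 2 * math.pi
        while ph - prev < -math.pi:
            ph += 2 * math.pi
        prev = ph
        w = 2 * math.pi * k / (n * dt)
        num += w * ph; den += w * w
    return num / den                    # least-squares slope = delay

dt = 0.004                              # 4 ms sampling
src = [math.sin(2 * math.pi * 10 * i * dt) * math.exp(-((i - 32) ** 2) / 200)
       for i in range(128)]
shift = 5                               # trace_b lags by 5 samples = 20 ms
delayed = [src[(i - shift) % len(src)] for i in range(len(src))]
tau = phase_delay(src, delayed, dt)
print(tau)  # ~ 0.02 s
```

Because the delay comes from a regression of cross-spectrum phase against frequency, the estimate is insensitive to the absolute 2π ambiguity that plagues per-trace phase unwrapping, which is the advantage the abstract highlights.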
NASA Astrophysics Data System (ADS)
Zhu, Jian-Rong; Li, Jian; Zhang, Chun-Mei; Wang, Qin
2017-10-01
The decoy-state method has been widely used in commercial quantum key distribution (QKD) systems. In view of practical decoy-state QKD with both source errors and statistical fluctuations, we propose a universal model of full parameter optimization in biased decoy-state QKD with phase-randomized sources. We then adopt this model to carry out simulations of two widely used sources: the weak coherent source (WCS) and the heralded single-photon source (HSPS). Results show that full parameter optimization can significantly improve not only the secure transmission distance but also the final key generation rate. Moreover, when taking source errors and statistical fluctuations into account, the performance of decoy-state QKD using the HSPS suffers less than that of decoy-state QKD using the WCS.
Charles, David; Tolleson, Christopher; Davis, Thomas L; Gill, Chandler E; Molinari, Anna L; Bliton, Mark J; Tramontana, Michael G; Salomon, Ronald M; Kao, Chris; Wang, Lily; Hedera, Peter; Phibbs, Fenna T; Neimat, Joseph S; Konrad, Peter E
2012-01-01
Deep brain stimulation provides significant symptomatic benefit for people with advanced Parkinson's disease whose symptoms are no longer adequately controlled with medication. Preliminary evidence suggests that subthalamic nucleus stimulation may also be efficacious in early Parkinson's disease, and results of animal studies suggest that it may spare dopaminergic neurons in the substantia nigra. We report the methodology and design of a novel Phase I clinical trial testing the safety and tolerability of deep brain stimulation in early Parkinson's disease and discuss previous failed attempts at neuroprotection. We recently conducted a prospective, randomized, parallel-group, single-blind pilot clinical trial of deep brain stimulation in early Parkinson's disease. Subjects were randomized to receive either optimal drug therapy or deep brain stimulation plus optimal drug therapy. Follow-up visits occurred every six months for a period of two years and included week-long therapy washouts. Thirty subjects with Hoehn & Yahr Stage II idiopathic Parkinson's disease were enrolled over a period of 32 months. Twenty-nine subjects completed all follow-up visits; one patient in the optimal drug therapy group withdrew from the study after baseline. Baseline characteristics for all thirty patients were not significantly different. This study demonstrates that it is possible to recruit and retain subjects in a clinical trial testing deep brain stimulation in early Parkinson's disease. The results of this trial will be used to support the design of a Phase III, multicenter trial investigating the efficacy of deep brain stimulation in early Parkinson's disease.
Charles, David; Tolleson, Christopher; Davis, Thomas L.; Gill, Chandler E.; Molinari, Anna L.; Bliton, Mark J.; Tramontana, Michael G.; Salomon, Ronald M.; Kao, Chris; Wang, Lily; Hedera, Peter; Phibbs, Fenna T.; Neimat, Joseph S.; Konrad, Peter E.
2014-01-01
Background Deep brain stimulation provides significant symptomatic benefit for people with advanced Parkinson's disease whose symptoms are no longer adequately controlled with medication. Preliminary evidence suggests that subthalamic nucleus stimulation may also be efficacious in early Parkinson's disease, and results of animal studies suggest that it may spare dopaminergic neurons in the substantia nigra. Objective We report the methodology and design of a novel Phase I clinical trial testing the safety and tolerability of deep brain stimulation in early Parkinson's disease and discuss previous failed attempts at neuroprotection. Methods We recently conducted a prospective, randomized, parallel-group, single-blind pilot clinical trial of deep brain stimulation in early Parkinson's disease. Subjects were randomized to receive either optimal drug therapy or deep brain stimulation plus optimal drug therapy. Follow-up visits occurred every six months for a period of two years and included week-long therapy washouts. Results Thirty subjects with Hoehn & Yahr Stage II idiopathic Parkinson's disease were enrolled over a period of 32 months. Twenty-nine subjects completed all follow-up visits; one patient in the optimal drug therapy group withdrew from the study after baseline. Baseline characteristics for all thirty patients were not significantly different. Conclusions This study demonstrates that it is possible to recruit and retain subjects in a clinical trial testing deep brain stimulation in early Parkinson's disease. The results of this trial will be used to support the design of a Phase III, multicenter trial investigating the efficacy of deep brain stimulation in early Parkinson's disease. PMID:23938229
Establishment and validation for the theoretical model of the vehicle airbag
NASA Astrophysics Data System (ADS)
Zhang, Junyuan; Jin, Yang; Xie, Lizhe; Chen, Chao
2015-05-01
The current design and optimization of the occupant restraint system (ORS) are based on numerous physical tests and mathematical simulations. Although these two methods are effective and accurate, they are too time-consuming and complex for the concept design phase of the ORS, so a fast and direct design and optimization method is needed at that stage. Since the airbag system is a crucial part of the ORS, this paper establishes a theoretical model of the vehicle airbag to clarify the interaction between occupants and airbags, and builds on it a fast design and optimization method for airbags in the concept design phase. First, a simplified mechanical relationship between the airbag's design parameters and the occupant response is derived from classical mechanics; the momentum theorem and the ideal gas state equation are then adopted to express this relationship. Using MATLAB, an iterative algorithm over discrete variables solves the proposed theoretical model for random inputs within a given range. Validations with MADYMO software confirm the validity and accuracy of the theoretical model for two principal design parameters, the inflated gas mass and the vent diameter, within their regular ranges. This research contributes to a deeper understanding of the occupant-airbag interaction, provides a fast design and optimization method for the airbag's principal parameters in the concept design phase, and supplies ranges of initial airbag design parameters for the subsequent CAE simulations and physical tests.
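The momentum-theorem/ideal-gas formulation described above can be illustrated with a toy time-stepping model. Everything below (gas temperature, discharge coefficient, constant bag volume, the constant inflow profile) is an assumed simplification for illustration, not the authors' actual model:

```python
import math

def peak_deceleration(gas_mass=0.08, vent_d=0.03, m_occ=75.0,
                      bag_area=0.25, bag_vol=0.06, t_inflate=0.03,
                      dt=1e-4, t_end=0.12):
    """Peak occupant deceleration (m/s^2) predicted by a toy airbag model.

    Bag pressure follows the ideal gas state equation p = m*R*T/V; the
    occupant load follows from the momentum theorem a = F/m_occ; gas leaves
    through the vent as incompressible orifice outflow. Bag volume is held
    constant -- a deliberate simplification.
    """
    R, T = 287.0, 500.0              # J/(kg K), assumed inflator gas temperature
    p_atm, rho, c_d = 101325.0, 1.2, 0.6
    a_vent = math.pi * (vent_d / 2.0) ** 2
    inflow = gas_mass / t_inflate    # constant mass inflow while inflating
    m_gas, t, peak = 0.0, 0.0, 0.0
    while t < t_end:
        if t < t_inflate:
            m_gas += inflow * dt                         # inflator filling
        dp = max(m_gas * R * T / bag_vol - p_atm, 0.0)   # gauge pressure
        peak = max(peak, dp * bag_area / m_occ)          # momentum theorem: a = F/m
        m_gas = max(m_gas - c_d * a_vent * math.sqrt(2 * rho * dp) * dt, 0.0)
        t += dt
    return peak
```

As in the paper, a larger vent bleeds pressure faster and lowers the peak load, while more inflator gas raises it; sweeping these two parameters is exactly the kind of fast concept-phase exploration a closed-form model enables.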
Molenaar, Heike; Boehm, Robert; Piepho, Hans-Peter
2018-01-01
Robust phenotypic data allow adequate statistical analysis and are crucial for any breeding purpose. Such data is obtained from experiments laid out to best control local variation. Additionally, experiments frequently involve two phases, each contributing environmental sources of variation. For example, in a former experiment we conducted to evaluate production related traits in Pelargonium zonale, there were two consecutive phases, each performed in a different greenhouse. Phase one involved the propagation of the breeding strains to obtain the stem cutting count, and phase two involved the assessment of root formation. The evaluation of the former study raised questions regarding options for improving the experimental layout: (i) Is there a disadvantage to using exactly the same design in both phases? (ii) Instead of generating a separate layout for each phase, can the design be optimized across both phases, such that the mean variance of a pair-wise treatment difference (MVD) can be decreased? To answer these questions, alternative approaches were explored to generate two-phase designs either in phase-wise order (Option 1) or across phases (Option 2). In Option 1 we considered the scenarios (i) using in both phases the same experimental design and (ii) randomizing each phase separately. In Option 2, we considered the scenarios (iii) generating a single design with eight replicates and splitting these among the two phases, (iv) separating the block structure across phases by dummy coding, and (v) design generation with optimal alignment of block units in the two phases. In both options, we considered the same or different block structures in each phase. The designs were evaluated by the MVD obtained by the intra-block analysis and the joint inter-block–intra-block analysis. 
The smallest MVD was most frequently obtained for designs generated across phases rather than for each phase separately, in particular when both phases of the design were separated with a single pseudo-level. The joint optimization ensured that treatment concurrences were equally balanced across pairs, one of the prerequisites for an efficient design. The proposed alternative approaches can be implemented with any model-based design packages with facilities to formulate linear models for treatment and block structures. PMID:29354145
Eeckhout, Eric; Berger, Alexandre; Roguelov, Christan; Lyon, Xavier; Imsand, Christophe; Fivaz-Arbane, Malika; Girod, Grégoire; De Benedetti, Edoardo
2003-08-01
IVUS is considered the most accurate tool for the assessment of optimal stent deployment. Direct stenting has been shown to be a safe, efficient, and resource-saving procedure in selected patients. In a prospective 1-month feasibility trial, a new combined IVUS-coronary stent delivery platform (Josonics Flex, Jomed, Helsingborn, Sweden) was evaluated during direct stenting in consecutive patients considered eligible for direct stenting. The feasibility endpoint was successful stent deployment without any clinical adverse event, while the efficacy endpoint was strategic adaptation according to standard IVUS criteria for optimal stent deployment at the intermediate phase (after a result considered angiographically optimal) and at the end of the intervention (after optimization according to IVUS standards). A total of 16 patients were successfully treated with this device without any major clinical complication. At the intermediate phase, optimal stent deployment was achieved in only four patients, while at the end only one patient had nonoptimal IVUS stent deployment. In particular, the minimal in-stent cross-section area increased from 6.3 +/- 1.2 to 8.3 +/- 2.5 mm(2). These preliminary data demonstrate the feasibility of direct stenting with a combined IVUS-stent catheter in selected patients and confirm the results from larger randomized trials on the impact of IVUS on strategic adaptations during coronary stent placement. Copyright 2003 Wiley-Liss, Inc.
Phase measurement error in summation of electron holography series.
McLeod, Robert A; Bergen, Michael; Malac, Marek
2014-06-01
Off-axis electron holography is a method for the transmission electron microscope (TEM) that measures the electric and magnetic properties of a specimen. The electrostatic and magnetic potentials modulate the electron wavefront phase. The error in measurement of the phase therefore determines the smallest observable changes in electric and magnetic properties. Here we explore the summation of a hologram series to reduce the phase error and thereby improve the sensitivity of electron holography. Summation of hologram series requires independent registration and correction of image drift and phase wavefront drift, the consequences of which are discussed. Optimization of the electro-optical configuration of the TEM for the double biprism configuration is examined. An analytical model of image and phase drift, composed of a combination of linear drift and Brownian random-walk, is derived and experimentally verified. The accuracy of image registration via cross-correlation and phase registration is characterized by simulated hologram series. The model of series summation errors allows the optimization of phase error as a function of exposure time and fringe carrier frequency for a target spatial resolution. An experimental example of hologram series summation is provided on WS2 fullerenes. A metric is provided to measure the object phase error from experimental results and compared to analytical predictions. The ultimate experimental object root-mean-square phase error is 0.006 rad (2π/1050) at a spatial resolution less than 0.615 nm and a total exposure time of 900 s. The ultimate phase error in vacuum adjacent to the specimen is 0.0037 rad (2π/1700). The analytical prediction of phase error differs from the experimental metrics by +7% inside the object and -5% in the vacuum, indicating that the model can provide reliable quantitative predictions. Crown Copyright © 2014. Published by Elsevier B.V. All rights reserved.
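The gain from summing a registered hologram series can be sketched numerically. The toy below models only the wavefront phase drift (one random offset per frame, removed using a vacuum reference region) plus white phase noise; image drift, fringe carrier frequency, and the Brownian drift component from the paper are ignored, and all sizes and noise levels are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, shape = 64, (32, 32)

# phase object: 0.5 rad inside a square region, vacuum elsewhere
obj = np.zeros(shape)
obj[8:24, 8:24] = 0.5

frames = []
for _ in range(n_frames):
    drift = rng.uniform(-np.pi, np.pi)       # per-frame wavefront phase drift
    noise = rng.normal(0.0, 0.2, shape)      # per-pixel phase noise (rad)
    frames.append(obj + drift + noise)

# phase registration: estimate each frame's drift from a vacuum corner, remove it
aligned = [f - f[:6, :6].mean() for f in frames]
summed = np.mean(aligned, axis=0)            # series summation

rms_single = np.sqrt(np.mean((aligned[0] - obj) ** 2))
rms_series = np.sqrt(np.mean((summed - obj) ** 2))
```

With 64 registered frames the residual RMS phase error drops roughly as 1/√N, which is the mechanism behind the 2π/1050 object sensitivity the paper reports for a 900 s series.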
NASA Astrophysics Data System (ADS)
Li, Yuanyuan; Gao, Guanjun; Zhang, Jie; Zhang, Kai; Chen, Sai; Yu, Xiaosong; Gu, Wanyi
2015-06-01
A simplex-method based optimizing (SMO) strategy is proposed to improve the transmission performance for dispersion uncompensated (DU) coherent optical systems with non-identical spans. Through analytical expression of quality of transmission (QoT), this strategy improves the Q factors effectively, while minimizing the number of erbium-doped optical fiber amplifier (EDFA) that needs to be optimized. Numerical simulations are performed for 100 Gb/s polarization-division multiplexed quadrature phase shift keying (PDM-QPSK) channels over 10-span standard single mode fiber (SSMF) with randomly distributed span-lengths. Compared to the EDFA configurations with complete span loss compensation, the Q factor of the SMO strategy is improved by approximately 1 dB at the optimal transmitter launch power. Moreover, instead of adjusting the gains of all the EDFAs to their optimal value, the number of EDFA that needs to be adjusted for SMO is reduced from 8 to 2, showing much less tuning costs and almost negligible performance degradation.
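A minimal sketch of the simplex idea follows. The abstract's analytical QoT expression is not given, so a made-up quadratic Q-factor penalty around hypothetical optimal EDFA gains stands in for it, minimized by a hand-rolled Nelder-Mead simplex search:

```python
import numpy as np

def nelder_mead(f, x0, step=1.0, tol=1e-10, max_iter=1000):
    """Minimize f over R^n with a basic Nelder-Mead simplex search."""
    x0 = np.asarray(x0, dtype=float)
    simplex = [x0] + [x0 + step * np.eye(len(x0))[i] for i in range(len(x0))]
    for _ in range(max_iter):
        simplex.sort(key=f)
        best, second, worst = simplex[0], simplex[-2], simplex[-1]
        if f(worst) - f(best) < tol:
            break
        centroid = np.mean(simplex[:-1], axis=0)
        xr = centroid + (centroid - worst)              # reflect worst vertex
        if f(xr) < f(best):
            xe = centroid + 2.0 * (centroid - worst)    # try expanding further
            simplex[-1] = xe if f(xe) < f(xr) else xr
        elif f(xr) < f(second):
            simplex[-1] = xr
        else:
            xc = centroid + 0.5 * (worst - centroid)    # contract toward centroid
            if f(xc) < f(worst):
                simplex[-1] = xc
            else:                                       # no progress: shrink to best
                simplex = [best + 0.5 * (v - best) for v in simplex]
    return min(simplex, key=f)

# hypothetical QoT: Q drops quadratically away from per-EDFA optimal gains (dB);
# the two gains and their optima are invented for this sketch
optimal_gains = np.array([17.0, 19.5])
neg_q = lambda g: float(np.sum((np.asarray(g) - optimal_gains) ** 2))
gains = nelder_mead(neg_q, [10.0, 10.0])
```

The same pattern would apply in the paper's setting by replacing `neg_q` with the analytical QoT model and restricting the search to the two most influential EDFAs, which is what keeps the tuning cost low.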
Di Marzo, Vincenzo; Centonze, Diego
2015-03-01
Regulatory authorities admit clinical studies with an initial enrichment phase to select patients that respond to treatment before randomization (Enriched Design Studies; EDSs). The trial period aims to prevent long-term drug exposure risks in patients with limited chances of improvement while optimizing costs. In EDSs for symptom control therapies providing early improvements and without a wash-out period, it is difficult to show further improvements and thus large therapeutic gains versus placebo. Moreover, in trials with cannabinoids, the therapeutic gains can be further biased in the postenrichment randomized phase because of carryover and other effects. The aims of the present review article are to examine the placebo effects in the enrichment and postenrichment phases of an EDS with Δ(9) -tetrahydrocannabinol and cannabidiol (THC/CBD) oromucosal spray in patients with multiple sclerosis (MS) spasticity and to discuss the possible causes of maintained efficacy after randomization in the placebo-allocated patients. The overall mean therapeutic gain of THC/CBD spray over placebo in resistant MS spasticity after 16 weeks can be estimated as a ~1.27-point improvement on the spasticity 0-10 Numerical Rating Scale (NRS; ~-20.1% of the baseline NRS score). We conclude that careful interpretation of the results of EDSs is required, especially when cannabinoid-based medications are being investigated. © 2014 John Wiley & Sons Ltd.
Distributed fiber sparse-wideband vibration sensing by sub-Nyquist additive random sampling
NASA Astrophysics Data System (ADS)
Zhang, Jingdong; Zheng, Hua; Zhu, Tao; Yin, Guolu; Liu, Min; Bai, Yongzhong; Qu, Dingrong; Qiu, Feng; Huang, Xianbing
2018-05-01
The round-trip time of the light pulse limits the maximum detectable vibration frequency response range of phase-sensitive optical time domain reflectometry (φ-OTDR). Unlike the uniform laser pulse interval in conventional φ-OTDR, we randomly modulate the pulse interval, so that an equivalent sub-Nyquist additive random sampling (sNARS) is realized for every sensing point of the long interrogation fiber. For a φ-OTDR system with 10 km sensing length, the sNARS method is optimized by theoretical analysis and Monte Carlo simulation, and the experimental results verify that a wideband sparse signal can be identified and reconstructed. Such a method can broaden the vibration frequency response range of φ-OTDR, which is of great significance in sparse-wideband-frequency vibration signal detection, such as rail track monitoring and metal defect detection.
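The core idea, randomizing the pulse interval so a tone above the mean-rate Nyquist limit stays identifiable, can be reproduced in a few lines. The rates, jitter range, and frequency grid below are assumptions for illustration, not the paper's system parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
f_vib = 180.0                  # vibration tone (Hz), above the 50 Hz Nyquist limit
mean_dt = 0.01                 # mean pulse interval: 100 Hz mean sampling rate

# additive random sampling: each interval = mean interval + random jitter
dts = mean_dt * (1.0 + rng.uniform(-0.4, 0.4, 400))
t = np.cumsum(dts)
x = np.sin(2.0 * np.pi * f_vib * t)

# correlate against candidate tones (a nonuniform DFT) to locate the peak
freqs = np.arange(1.0, 250.0, 0.5)
spec = np.abs(np.exp(-2j * np.pi * np.outer(freqs, t)) @ x) / len(t)
f_est = freqs[np.argmax(spec)]
```

With uniform 100 Hz sampling the 180 Hz tone would alias onto 80 Hz; because the jitter accumulates along the pulse train, the alias decorrelates and the spectral peak stays at the true frequency, which is the sNARS effect the abstract describes.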
Joint Waveform Optimization and Adaptive Processing for Random-Phase Radar Signals
2014-01-01
extended targets," IEEE Journal of Selected Topics in Signal Processing, vol. 1, no. 1, pp. 42-55, June 2007. [2] S. Sen and A. Nehorai, "OFDM MIMO ... radar compared to traditional waveforms. I. INTRODUCTION There has been much recent interest in waveform design for multiple-input, multiple-output (MIMO) ... amplitude. When the resolution capability of the MIMO radar system is of interest, the transmit waveform can be designed to sharpen the radar ambiguity
Buchner, Anton; Elsässer, Reiner; Bias, Peter
2014-11-01
This dose-ranging study was conducted to identify the optimal fixed dose of lipegfilgrastim compared with pegfilgrastim 6.0 mg for the provision of neutrophil support during myelosuppressive chemotherapy in patients with breast cancer. A phase 2 study was conducted in which 208 chemotherapy-naive patients were randomized to receive lipegfilgrastim 3.0, 4.5, or 6.0 mg or pegfilgrastim 6.0 mg. Study drugs were administered as a single subcutaneous injection on day 2 of each chemotherapy cycle (doxorubicin/docetaxel on day 1 for four 3-week cycles). The primary outcome measure was duration of severe neutropenia (DSN) in cycle 1. Patients treated with lipegfilgrastim experienced shorter DSN in cycle 1 with higher doses. The mean DSN was 0.76 days in the lipegfilgrastim 6.0-mg group and 0.87 days in the pegfilgrastim 6.0-mg group, with no significant differences between treatment groups. Treatment with lipegfilgrastim 6.0 mg was consistently associated with a higher absolute neutrophil count (ANC) at nadir, shorter ANC recovery time, and a similar safety and tolerability profile compared with pegfilgrastim. This phase 2 study demonstrated that lipegfilgrastim 6.0 mg is the optimal dose for patients with breast cancer and provides neutrophil support that is at least equivalent to the standard 6.0-mg fixed dose of pegfilgrastim.
Proetel, Ulrike; Pletsch, Nadine; Lauseker, Michael; Müller, Martin C; Hanfstein, Benjamin; Krause, Stefan W; Kalmanti, Lida; Schreiber, Annette; Heim, Dominik; Baerlocher, Gabriela M; Hofmann, Wolf-Karsten; Lange, Elisabeth; Einsele, Hermann; Wernli, Martin; Kremers, Stephan; Schlag, Rudolf; Müller, Lothar; Hänel, Mathias; Link, Hartmut; Hertenstein, Bernd; Pfirrman, Markus; Hochhaus, Andreas; Hasford, Joerg; Hehlmann, Rüdiger; Saußele, Susanne
2014-07-01
The impact of imatinib dose on response rates and survival in older patients with chronic myeloid leukemia in chronic phase has not been studied well. We analyzed data from the German CML-Study IV, a randomized five-arm treatment optimization study in newly diagnosed BCR-ABL-positive chronic myeloid leukemia in chronic phase. Patients randomized to imatinib 400 mg/day (IM400) or imatinib 800 mg/day (IM800) and stratified according to age (≥65 years vs. <65 years) were compared regarding dose, response, adverse events, rates of progression, and survival. The full 800 mg dose was given after a 6-week run-in period with imatinib 400 mg/day. The dose could then be reduced according to tolerability. A total of 828 patients were randomized to IM400 or IM800. Seven hundred eighty-four patients were evaluable (IM400, 382; IM800, 402). One hundred ten patients (29 %) on IM400 and 83 (21 %) on IM800 were ≥65 years. The median dose per day was lower for patients ≥65 years on IM800, with the highest median dose in the first year (466 mg/day for patients ≥65 years vs. 630 mg/day for patients <65 years). Older patients on IM800 achieved major molecular remission and deep molecular remission as fast as younger patients, in contrast to standard dose imatinib with which older patients achieved remissions much later than younger patients. Grades 3 and 4 adverse events were similar in both age groups. Five-year relative survival for older patients was comparable to that of younger patients. We suggest that the optimal dose for older patients is higher than 400 mg/day. ClinicalTrials.gov identifier: NCT00055874
Borges, Cleber N; Bruns, Roy E; Almeida, Aline A; Scarminio, Ieda S
2007-07-09
A composite simplex centroid-simplex centroid mixture design is proposed for simultaneously optimizing two mixture systems. The complementary model is formed by multiplying special cubic models for the two systems. The design was applied to the simultaneous optimization of both mobile phase chromatographic mixtures and extraction mixtures for the Camellia sinensis Chinese tea plant. The extraction mixtures investigated contained varying proportions of ethyl acetate, ethanol and dichloromethane while the mobile phase was made up of varying proportions of methanol, acetonitrile and a methanol-acetonitrile-water (MAW) 15%:15%:70% mixture. The experiments were block randomized corresponding to a split-plot error structure to minimize laboratory work and reduce environmental impact. Coefficients of an initial saturated model were obtained using Scheffe-type equations. A cumulative probability graph was used to determine an approximate reduced model. The split-plot error structure was then introduced into the reduced model by applying generalized least square equations with variance components calculated using the restricted maximum likelihood approach. A model was developed to calculate the number of peaks observed with the chromatographic detector at 210 nm. A 20-term model contained essentially all the statistical information of the initial model and had a root mean square calibration error of 1.38. The model was used to predict the number of peaks eluted in chromatograms obtained from extraction solutions that correspond to axial points of the simplex centroid design. The significant model coefficients are interpreted in terms of interacting linear, quadratic and cubic effects of the mobile phase and extraction solution components.
Emoto, Akira; Fukuda, Takashi
2013-02-20
For Fourier transform holography, an effective random phase distribution with randomly displaced phase segments is proposed for obtaining a smooth finite optical intensity distribution in the Fourier transform plane. Since unitary phase segments are randomly distributed in-plane, the blanks give various spatial frequency components to an image, and thus smooth the spectrum. Moreover, by randomly changing the phase segment size, spike generation from the unitary phase segment size in the spectrum can be reduced significantly. As a result, a smooth spectrum including sidebands can be formed at a relatively narrow extent. The proposed phase distribution sustains the primary functions of a random phase mask for holographic-data recording and reconstruction. Therefore, this distribution is expected to find applications in high-density holographic memory systems, replacing conventional random phase mask patterns.
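The smoothing effect of a segmented random phase mask is easy to demonstrate in one dimension. The sizes below are arbitrary, and a flat amplitude pattern stands in for the recorded data page:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 256
amplitude = np.ones(N)            # flat 1-D amplitude pattern (toy stand-in)

# phase mask built from segments of *random* size, each carrying one random
# phase; the segment-size randomness is what suppresses fixed-pitch spikes
phase = np.empty(N)
i = 0
while i < N:
    seg = int(rng.integers(4, 17))            # random segment size: 4..16 pixels
    phase[i:i + seg] = rng.uniform(0.0, 2.0 * np.pi)
    i += seg

spec_plain = np.abs(np.fft.fft(amplitude))
spec_masked = np.abs(np.fft.fft(amplitude * np.exp(1j * phase)))

# fraction of total spectral magnitude concentrated in the strongest bin
conc_plain = spec_plain.max() / spec_plain.sum()
conc_masked = spec_masked.max() / spec_masked.sum()
```

Without the mask the entire spectrum collapses into the DC bin (`conc_plain` is 1.0); the segmented random phase spreads it into a smooth, finite distribution, which is what keeps the intensity within the dynamic range of the recording medium in the Fourier transform plane.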
Bellmunt, J; Kerst, J M; Vázquez, F; Morales-Barrera, R; Grande, E; Medina, A; González Graguera, M B; Rubio, G; Anido, U; Fernández Calvo, O; González-Billalabeitia, E; Van den Eertwegh, A J M; Pujol, E; Perez-Gracia, J L; González Larriba, J L; Collado, R; Los, M; Maciá, S; De Wit, R
2017-07-01
Despite the advent of immunotherapy in urothelial cancer, there is still a need to find effective cytotoxic agents beyond the first and second lines. Vinflunine is the only treatment approved in this setting by the European Medicines Agency, and taxanes are also widely used in second line. Cabazitaxel is a taxane with activity in docetaxel-refractory cancers. A randomized study was conducted to compare its efficacy versus vinflunine. This is a multicenter, randomized, open-label, phase II/III study, following Simon's optimal method with stopping rules based on an interim futility analysis and a formal efficacy analysis at the end of the phase II. ECOG Performance Status, anaemia, and liver metastases were stratification factors. Primary objectives were overall response rate for the phase II and overall survival for the phase III. Seventy patients were included in the phase II across 19 institutions in Europe. Baseline characteristics were well balanced between the two arms. Three patients (13%) obtained a partial response on cabazitaxel (95% CI 2.7-32.4) and six patients (30%) in the vinflunine arm (95% CI 11.9-54.3). Median progression-free survival for cabazitaxel was 1.9 versus 2.9 months for vinflunine (P = 0.039). The study did not proceed to phase III since the futility analysis showed a lack of efficacy of cabazitaxel. A trend for overall survival benefit was found favouring vinflunine (median 7.6 versus 5.5 months). Grade 3-4 related adverse events were seen in 41% of patients, with no difference between the two arms. This phase II/III second-line bladder study comparing cabazitaxel with vinflunine was closed when the phase II showed a lack of efficacy of the cabazitaxel arm. Vinflunine results were consistent with those known previously. NCT01830231. © The Author 2017. Published by Oxford University Press on behalf of the European Society for Medical Oncology. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
A robust optimization model for distribution and evacuation in the disaster response phase
NASA Astrophysics Data System (ADS)
Fereiduni, Meysam; Shahanaghi, Kamran
2017-03-01
Natural disasters, such as earthquakes, affect thousands of people and can cause enormous financial loss. Therefore, an efficient response immediately following a natural disaster is vital to minimize these negative effects. This paper presents a network design model for humanitarian logistics which assists in location and allocation decisions for multiple disaster periods. First, a single-objective optimization model is presented that addresses the response phase of disaster management. This model helps decision makers to make optimal choices regarding location, allocation, and evacuation simultaneously. The proposed model also considers emergency tents as temporary medical centers. To cope with the uncertainty and dynamic nature of disasters and their consequences, our multi-period robust model considers the values of critical input data over a set of scenarios. Second, because of probable disruption in the distribution infrastructure (such as bridges), Monte Carlo simulation is used for generating the related random numbers and scenarios, and the p-robust approach is utilized to formulate the new network. The p-robust approach can account for possible damage along pathways and among relief bases. We present a case study of our robust optimization approach for a plausible earthquake in region 1 of Tehran. Sensitivity analysis experiments are conducted to explore the effects of various problem parameters; these experiments give managerial insights and can guide decision makers under a variety of conditions. The performances of the "robust optimization" and "p-robust optimization" approaches are then compared, and the analysis yields intriguing results and practical insights.
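The scenario-based p-robust selection can be sketched as follows. The toy network, the disruption-factor range, and the value of p are all invented for illustration; the paper's full model also covers allocation, evacuation, and multiple periods, which are omitted here:

```python
import random

random.seed(3)
demand_points = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]
candidate_bases = [(1.0, 1.0), (3.0, 1.0), (2.0, 2.0)]   # hypothetical sites

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

# Monte Carlo scenario generation: every base-to-demand route gets a random
# disruption factor (1 = intact road, 3 = heavily damaged)
scenarios = [[[random.uniform(1.0, 3.0) for _ in demand_points]
              for _ in candidate_bases] for _ in range(500)]

def cost(base, scen):
    return sum(f * dist(candidate_bases[base], d)
               for f, d in zip(scen[base], demand_points))

n = len(candidate_bases)
best_cost = [min(cost(b, s) for b in range(n)) for s in scenarios]

p = 0.5  # p-robustness: relative regret at most p in *every* scenario
feasible = [b for b in range(n)
            if all(cost(b, s) <= (1.0 + p) * bc
                   for s, bc in zip(scenarios, best_cost))]

# among p-robust sites, pick the one with the lowest expected cost
chosen = min(feasible or range(n),
             key=lambda b: sum(cost(b, s) for s in scenarios))
```

Tightening p shrinks the feasible set toward sites that hedge against the worst disruption draws, which is the trade-off the paper explores between the robust and p-robust formulations.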
Christoph, Annette; Eerdekens, Marie-Henriette; Kok, Maurits; Volkers, Gisela; Freynhagen, Rainer
2017-09-01
Chronic low back pain (LBP) is a common condition, usually with the involvement of nociceptive and neuropathic pain components, high economic burden and impact on quality of life. Cebranopadol is a potent, first-in-class drug candidate with a novel mechanistic approach, combining nociceptin/orphanin FQ peptide and opioid peptide receptor agonism. We conducted the first phase II, randomized, double-blind, placebo- and active-controlled trial, evaluating the analgesic efficacy, safety, and tolerability of cebranopadol in patients with moderate-to-severe chronic LBP with and without neuropathic pain component. Patients were treated for 14 weeks with cebranopadol 200, 400, or 600 μg once daily, tapentadol 200 mg twice daily, or placebo. The primary efficacy endpoints were the change from baseline pain to the weekly average 24-hour pain during the entire 12 weeks and during week 12 of the maintenance phase. Cebranopadol demonstrated analgesic efficacy, with statistically significant and clinically relevant improvements over placebo for all doses as did tapentadol. The responder analysis (≥30% or ≥50% pain reduction) confirmed these results. Cebranopadol and tapentadol displayed beneficial effects on sleep and functionality. Cebranopadol treatment was safe, with higher doses leading to higher treatment discontinuations because of treatment-emergent adverse events occurring mostly during titration. Those patients reaching the target doses had an acceptable tolerability profile. The incidence rate of most frequently reported treatment-emergent adverse events during maintenance phase was ≤10%. Although further optimizing the titration scheme to the optimal dose for individual patients is essential, cebranopadol is a new drug candidate with a novel mechanistic approach for potential chronic LBP treatment.
Optimal configuration of microstructure in ferroelectric materials by stochastic optimization
NASA Astrophysics Data System (ADS)
Jayachandran, K. P.; Guedes, J. M.; Rodrigues, H. C.
2010-07-01
An optimization procedure determining the ideal configuration at the microstructural level of ferroelectric (FE) materials is applied to maximize piezoelectricity. Piezoelectricity in ceramic FEs differs significantly from that of single crystals because of the presence of crystallites (grains) possessing crystallographic axes aligned imperfectly. The piezoelectric properties of a polycrystalline (ceramic) FE are inextricably related to the grain orientation distribution (texture). The set of combinations of variables, known as the solution space, which dictates the texture of a ceramic is unlimited, and hence the choice of the optimal solution which maximizes the piezoelectricity is complicated. Thus, a stochastic global optimization combined with homogenization is employed for the identification of the optimal granular configuration of the FE ceramic microstructure with optimum piezoelectric properties. The macroscopic equilibrium piezoelectric properties of the polycrystalline FE are calculated using mathematical homogenization at each iteration step. The configuration of grains, characterized by its orientations, at each iteration is generated using a randomly selected set of orientation distribution parameters. The optimization procedure applied to the single crystalline phase compares well with the experimental data. Apparent enhancement of piezoelectric coefficient d33 is observed in an optimally oriented BaTiO3 single crystal. Based on the good agreement of results with the published data in single crystals, we proceed to apply the methodology in polycrystals. A configuration of crystallites, simultaneously constraining the orientation distribution of the c-axis (polar axis) while incorporating ab-plane randomness, which would multiply the overall piezoelectricity in ceramic BaTiO3, is also identified. The orientation distribution of the c-axes is found to be a narrow Gaussian distribution centered around 45°.
The piezoelectric coefficient in such a ceramic is found to be nearly three times that of the single crystal. Our optimization model provides designs for materials with enhanced piezoelectric performance, which should stimulate further studies of materials possessing higher spontaneous polarization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Hong; Wang, Shaobu; Fan, Rui
This report summarizes the work performed under the LDRD project on a preliminary study of knowledge automation, with specific focus on the impact of uncertainties in human decision making on the optimization of process operation. First, statistics on signals from the Brain-Computer Interface (BCI) are analyzed to characterize the uncertainties of human operators during the decision-making phase using electroencephalogram (EEG) signals. This is followed by discussion of an architecture that reveals the equivalence between optimization and closed-loop feedback control design, where it is shown that any optimization problem can be transformed into a control design problem for a closed-loop system. This leads to a "closed loop" framework in which the decision-making structure is subject to both process disturbances and controller uncertainties; the latter represent the uncertainty or randomness arising during the human decision-making phase. As a result, a stochastic optimization problem is formulated and a novel solution is proposed using probability density function (PDF) shaping for both the cost function and the constraints, based on stochastic distribution control concepts. A sufficient condition is derived that guarantees convergence to the optimal solution, and discussions are given for both the total probabilistic solution and chance-constrained optimization, which have been well studied in the optimal power flow (OPF) area. A simple case study is carried out for the economic dispatch of power in a grid system with distributed energy resources (DERs), and encouraging results show that significant savings in generation cost can be expected.
Ant groups optimally amplify the effect of transiently informed individuals
NASA Astrophysics Data System (ADS)
Gelblum, Aviram; Pinkoviezky, Itai; Fonio, Ehud; Ghosh, Abhijit; Gov, Nir; Feinerman, Ofer
2015-07-01
To cooperatively transport a large load, it is important that carriers conform in their efforts and align their forces. A downside of behavioural conformism is that it may decrease the group's responsiveness to external information. Combining experiment and theory, we show how ants optimize collective transport. On the single-ant scale, optimization stems from decision rules that balance individuality and compliance. Macroscopically, these rules poise the system at the transition between random walk and ballistic motion where the collective response to the steering of a single informed ant is maximized. We relate this peak in response to the divergence of susceptibility at a phase transition. Our theoretical models predict that the ant-load system can be transitioned through the critical point of this mesoscopic system by varying its size; we present experiments supporting these predictions. Our findings show that efficient group-level processes can arise from transient amplification of individual-based knowledge.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, W; Shen, J; Stoker, J
2015-06-15
Purpose: To compare the impact of the interplay effect on 3D and 4D robustly optimized intensity-modulated proton therapy (IMPT) plans to treat lung cancer. Methods: Two IMPT plans were created for 11 non-small-cell lung cancer cases with 6–14 mm spots. 3D robust optimization generated plans on average CTs with the internal gross tumor volume density overridden, to deliver 66 CGyE in 33 fractions to the internal target volume (ITV). 4D robust optimization generated plans on 4D CTs with delivery of the prescribed dose to the clinical target volume (CTV). In 4D optimization, the CTV of individual 4D CT phases received non-uniform doses to achieve a uniform cumulative dose. Dose evaluation software was developed to model time-dependent spot delivery and thereby incorporate the interplay effect, with randomized starting phases of each field per fraction. Patient anatomy voxels were mapped from phase to phase via deformable image registration to score doses. Indices from dose-volume histograms (DVHs) were used to compare target coverage, dose homogeneity, and normal-tissue sparing; DVH indices were compared using the Wilcoxon test. Results: In the presence of the interplay effect, 4D robust optimization produced IMPT plans with better target coverage and homogeneity, but slightly worse normal-tissue sparing, compared to 3D robust optimization (unit: Gy) [D95% ITV: 63.5 vs 62.0 (p=0.014), D5% - D95% ITV: 6.2 vs 7.3 (p=0.37), D1% spinal cord: 29.0 vs 29.5 (p=0.52), Dmean total lung: 14.8 vs 14.5 (p=0.12), D33% esophagus: 33.6 vs 33.1 (p=0.28)]. The improvement in target coverage (D95%,4D – D95%,3D) was related to the ratio RMA³/(TV × 10⁻⁴), with RMA and TV being the respiratory motion amplitude and tumor volume, respectively. Peak benefit was observed at ratios between 2 and 10, corresponding to a TV of 125–625 cm³ with 0.5-cm RMA. Conclusion: 4D optimization produced more interplay-effect-resistant plans than 3D optimization. It is most effective when respiratory motion is modest compared to tumor volume. NIH/NCI K25CA168984; Eagles Cancer Research Career Development; The Lawrence W. and Marilyn W. Matteson Fund for Cancer Research; Mayo ASU Seed Grant; The Kemper Marley Foundation.
Curvelet-based compressive sensing for InSAR raw data
NASA Astrophysics Data System (ADS)
Costa, Marcello G.; da Silva Pinho, Marcelo; Fernandes, David
2015-10-01
The aim of this work is to evaluate the compression performance of SAR raw data for interferometry applications, collected by the airborne BRADAR (Brazilian SAR system operating in X and P bands), using a new approach based on compressive sensing (CS) to achieve effective recovery with good phase preservation. A real-time capability is desirable for this framework, where the collected data can be compressed to reduce onboard storage and the bandwidth required for transmission. In CS theory, a sparse unknown signal can be recovered from a small number of random or pseudo-random measurements by sparsity-promoting nonlinear recovery algorithms, so the original data volume can be significantly reduced. To achieve a sparse representation of the SAR signal, a curvelet transform was applied. The curvelets constitute a directional frame that allows an optimal sparse representation of objects with discontinuities along smooth curves, as observed in raw data, and provides advanced denoising optimization. For the tests, a scene of 8192 x 2048 samples in range and azimuth was available in X-band with 2 m resolution. The sparse representation was compressed using low-dimension measurement matrices in each curvelet subband, and an iterative CS reconstruction method based on IST (iterative soft/shrinkage thresholding) was adjusted to recover the curvelet coefficients and then the original signal. To evaluate the compression performance, the compression ratio (CR) and signal-to-noise ratio (SNR) were computed; because interferometry applications require higher reconstruction accuracy, phase parameters such as the standard deviation of the phase (PSD) and the mean phase error (MPE) were also computed. Moreover, in the image domain, a single-look complex image was generated to evaluate the compression effects.
All results were analyzed in terms of sparsity to provide efficient compression and recovery quality appropriate for InSAR applications, demonstrating the feasibility of applying compressive sensing.
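The IST recovery loop the abstract describes can be sketched in a few lines. This is a hedged stand-in, not the paper's pipeline: a generic sparse vector replaces the curvelet coefficients of SAR raw data, and a single Gaussian measurement matrix replaces the per-subband matrices.

```python
import numpy as np

def soft_threshold(x, t):
    """Soft/shrinkage-threshold operator used by IST."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam=0.05, n_iter=500):
    """Recover a sparse vector x from y = A @ x by iterative soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2        # step size from the spectral norm of A
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 96, 8                     # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true                           # compressed measurements
x_hat = ista(A, y)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # relative error
```

In the paper's setting, `A` would act on curvelet coefficients subband by subband, and the complex-valued SAR phase would be evaluated after synthesizing the image.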
Theory of inhomogeneous quantum systems. III. Variational wave functions for Fermi fluids
NASA Astrophysics Data System (ADS)
Krotscheck, E.
1985-04-01
We develop a general variational theory for inhomogeneous Fermi systems such as the electron gas in a metal surface, the surface of liquid 3He, or simple models of heavy nuclei. The ground-state wave function is expressed in terms of two-body correlations, a one-body attenuation factor, and a model-system Slater determinant. Massive partial summations of cluster expansions are performed by means of Born-Green-Yvon and hypernetted-chain techniques. An optimal single-particle basis is generated by a generalized Hartree-Fock equation in which the two-body correlations screen the bare interparticle interaction. The optimization of the pair correlations leads to a state-averaged random-phase-approximation equation and a strictly microscopic determination of the particle-hole interaction.
Guix-Comellas, Eva Maria; Rozas-Quesada, Librada; Velasco-Arnaiz, Eneritz; Ferres-Canals, Ariadna; Estrada-Masllorens, Joan Maria; Force-Sanmartín, Enriqueta; Noguera-Julian, Antoni
2018-05-03
To evaluate the association of a new nursing intervention with adherence to antituberculosis treatment in a pediatric cohort (<18 years). Tuberculosis remains a public health problem worldwide. The risk of developing tuberculosis after primary infection, and its severity, are higher in children. Proper adherence to antituberculosis treatment is critical for disease control. Non-randomized controlled trial; Phase 1, retrospective (2011-2013), compared with Phase 2, prospective with intervention (2015-2016), in a referral center for pediatric tuberculosis in Spain (NCT03230409). A total of 359 patients who received antituberculosis drugs after close contact with a smear-positive patient (primary chemoprophylaxis) or were treated for latent tuberculosis infection or tuberculosis disease were included, 261 in Phase 1 and 98 in Phase 2. In Phase 2, a new nurse-led intervention was implemented in all patients; it included two educational steps (written information in the child's native language and follow-up telephone calls) and two monitoring steps (Eidus-Hamilton test and follow-up questionnaire) that were carried out exclusively by nurses. Adherence to antituberculosis treatment increased from 74.7% in Phase 1 to 87.8% in Phase 2 (p=0.014; Chi-square test) after the implementation of the nurse-led intervention. In Phase 2, non-adherence was only associated with being born abroad (28.6% versus 7.8%; p=0.019; Chi-square test) and with foreign-origin families (27.3% versus 0%; p<0.0001; Chi-square test). The nurse-led intervention was associated with an increase in adherence to antituberculosis treatment. Immigrant-related variables remained major risk factors for sub-optimal adherence in a low-endemic setting. This article is protected by copyright. All rights reserved.
Weiss, Roger D.; Potter, Jennifer Sharpe; Provost, Scott E.; Huang, Zhen; Jacobs, Petra; Hasson, Albert; Lindblad, Robert; Connery, Hilary Smith; Prather, Kristi; Ling, Walter
2010-01-01
The National Institute on Drug Abuse Clinical Trials Network launched the Prescription Opioid Addiction Treatment Study (POATS) in response to rising rates of prescription opioid dependence and gaps in understanding the optimal course of treatment for this population. POATS employed a multi-site, two-phase adaptive, sequential treatment design to approximate clinical practice. The study took place at 10 community treatment programs around the United States. Participants included men and women age ≥18 who met Diagnostic and Statistical Manual, 4th Edition criteria for dependence upon prescription opioids, with physiologic features; those with a prominent history of heroin use (according to pre-specified criteria) were excluded. All participants received buprenorphine/naloxone (bup/nx). Phase 1 consisted of 4 weeks of bup/nx treatment, including a 14-day dose taper, with 8 weeks of follow-up. Phase 1 participants were monitored for treatment response during these 12 weeks. Those who relapsed to opioid use, as defined by pre-specified criteria, were invited to enter Phase 2; Phase 2 consisted of 12 weeks of bup/nx stabilization treatment, followed by a 4-week taper and 8 weeks of post-treatment follow-up. Participants were randomized at the beginning of Phase 1 to receive bup/nx, paired with either Standard Medical Management (SMM) or Enhanced Medical Management (EMM; defined as SMM plus individual drug counseling). Eligible participants entering Phase 2 were re-randomized to either EMM or SMM. POATS was developed to determine what benefit, if any, EMM offers over SMM in short-term and longer-term treatment paradigm. This paper describes the rationale and design of the study. PMID:20116457
Evolutionary-Optimized Photonic Network Structure in White Beetle Wing Scales.
Wilts, Bodo D; Sheng, Xiaoyuan; Holler, Mirko; Diaz, Ana; Guizar-Sicairos, Manuel; Raabe, Jörg; Hoppe, Robert; Liu, Shu-Hao; Langford, Richard; Onelli, Olimpia D; Chen, Duyu; Torquato, Salvatore; Steiner, Ullrich; Schroer, Christian G; Vignolini, Silvia; Sepe, Alessandro
2018-05-01
Most studies of structural color in nature concern periodic arrays, which create color through the interference of light. The color white, however, relies on the multiple scattering of light within a randomly structured medium, which randomizes the direction and phase of incident light. Opaque white materials therefore must be much thicker than periodic structures. It is known that flying insects create "white" in extremely thin layers. This raises the question of whether evolution has optimized the wing scale morphology for white reflection at minimum material use. This hypothesis is difficult to prove, since it requires detailed knowledge of the scattering morphology combined with a suitable theoretical model. Here, a cryoptychographic X-ray tomography method is employed to obtain a full 3D structural dataset of the network morphology within a white beetle wing scale. By digitally manipulating this 3D representation, this study demonstrates that this morphology indeed provides the highest white retroreflection at the minimum use of material, and hence weight, for the organism. Changing any of the network parameters (within the parameter space accessible to biological materials) either increases the weight, increases the thickness, or reduces reflectivity, providing clear evidence for the evolutionary optimization of this morphology. © 2017 The Authors. Published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Optimal strategy analysis based on robust predictive control for inventory system with random demand
NASA Astrophysics Data System (ADS)
Saputra, Aditya; Widowati, Sutrisno
2017-12-01
In this paper, the optimal strategy for a single-product, single-supplier inventory system with random demand is analyzed using robust predictive control with an additive random parameter. We formulate the dynamics of this system as a linear state space model with an additive random parameter. To determine and analyze the optimal strategy for the given inventory system, we use a robust predictive control approach, which yields the optimal strategy, i.e., the optimal product volume that should be purchased from the supplier in each time period so that the expected cost is minimal. A numerical simulation is performed in MATLAB with generated random inventory data, where the inventory level must be controlled as close as possible to a chosen set point. The results show that the robust predictive control model provides the optimal strategy, i.e., the optimal product volume to purchase, and that the inventory level followed the given set point.
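A hedged one-step sketch of the receding-horizon idea: a certainty-equivalent controller with invented demand statistics, far simpler than the paper's robust predictive controller, but showing how ordering against expected demand keeps the level near a set point.

```python
import numpy as np

# Inventory dynamics x[t+1] = x[t] + u[t] - d[t] with random demand d.
# Each period the order u[t] is chosen so the *expected* next inventory
# equals the set point (all numbers below are illustrative).
rng = np.random.default_rng(1)
set_point = 50.0
mean_demand = 10.0
T = 100

x = 0.0                    # initial inventory
levels = []
for t in range(T):
    u = max(set_point - x + mean_demand, 0.0)  # certainty-equivalent order
    d = rng.poisson(mean_demand)               # realized random demand
    x = x + u - d
    levels.append(x)

# After a short transient, the level fluctuates around the set point.
print(np.mean(levels[10:]))
```

A robust predictive controller would instead optimize the orders over a horizon against worst-case or distributional demand, but the closed-loop behavior it targets is the same: tracking the set point at minimal expected cost.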
Wang, Shengjun; Jiang, Hongli; Yu, Qin; She, Bin; Mao, Bing
2017-01-05
The common cold is a common and frequent respiratory disease mainly caused by viral infection of the upper respiratory tract. Chinese herbal medicine has been increasingly prescribed to treat the common cold; however, there is a lack of evidence to support the wide utility of this regimen. This protocol describes an ongoing phase II randomized controlled clinical trial, based on the theory of traditional Chinese medicine (TCM), with the objective of evaluating the efficacy and safety of Lian-Ju-Gan-Mao capsules (LJGMC), a Chinese patent medicine, compared with placebo in patients suffering from the common cold with wind-heat syndrome (CCWHS). This is a multicenter, randomized, double-blind, placebo-controlled phase II clinical trial. A total of 240 patients will be recruited and randomly assigned to a high-dose group, medium-dose group, low-dose group, and placebo-matched group in a 1:1:1:1 ratio. The treatment course is 3 consecutive days, with a 5-day follow-up. The primary outcome is time to all symptoms' clearance. Secondary outcomes include time to the disappearance of primary symptoms and each secondary symptom, time to fever relief, time to fever clearance, and change in TCM symptom and sign scores. This trial is a well-designed study according to principles and regulations issued by the China Food and Drug Administration (CFDA). The results will provide high-quality evidence on the efficacy and safety of LJGMC in treating CCWHS and help to optimize the dose for the next phase III clinical trial. Moreover, the protocol presents a detailed and practical methodology for future clinical trials of drugs developed based on TCM. Chinese Clinical Trial Registry, ChiCTR-IPR-15006504 . Registered on 4 June 2015.
Wasdell, Michael B; Jan, James E; Bomben, Melissa M; Freeman, Roger D; Rietveld, Wop J; Tai, Joseph; Hamilton, Donald; Weiss, Margaret D
2008-01-01
The purpose of this study was to determine the efficacy of controlled-release (CR) melatonin in the treatment of delayed sleep phase syndrome and impaired sleep maintenance of children with neurodevelopmental disabilities including autistic spectrum disorders. A randomized double-blind, placebo-controlled crossover trial of CR melatonin (5 mg) followed by a 3-month open-label study was conducted during which the dose was gradually increased until the therapy showed optimal beneficial effects. Sleep characteristics were measured by caregiver who completed somnologs and wrist actigraphs. Clinician rating of severity of the sleep disorder and improvement from baseline, along with caregiver ratings of global functioning and family stress were also obtained. Fifty-one children (age range 2-18 years) who did not respond to sleep hygiene intervention were enrolled. Fifty patients completed the crossover trial and 47 completed the open-label phase. Recordings of total night-time sleep and sleep latency showed significant improvement of approximately 30 min. Similarly, significant improvement was observed in clinician and parent ratings. There was additional improvement in the open-label somnolog measures of sleep efficiency and the longest sleep episode in the open-label phase. Overall, the therapy improved the sleep of 47 children and was effective in reducing family stress. Children with neurodevelopmental disabilities, who had treatment resistant chronic delayed sleep phase syndrome and impaired sleep maintenance, showed improvement in melatonin therapy.
Vittengl, Jeffrey R; Anna Clark, Lee; Thase, Michael E; Jarrett, Robin B
2017-07-01
Responders to acute-phase cognitive therapy (A-CT) for major depressive disorder (MDD) often relapse or recur, but continuation-phase cognitive therapy (C-CT) or fluoxetine reduces risks for some patients. We tested composite moderators of C-CT versus fluoxetine's preventive effects to inform continuation treatment selection. Responders to A-CT for MDD judged to be at higher risk for relapse due to unstable or partial remission (N=172) were randomized to 8 months of C-CT or fluoxetine with clinical management and assessed, free from protocol treatment, for 24 additional months. Pre-continuation-treatment characteristics that in survival analyses moderated treatments' effects on relapse over 8 months of continuation-phase treatment (residual symptoms and negative temperament) and on relapse/recurrence over the full observation period's 32 months (residual symptoms and age) were combined to estimate the potential advantage of C-CT versus fluoxetine for individual patients. Assigning patients to optimal continuation treatment (i.e., to C-CT or fluoxetine, depending on patients' pre-continuation-treatment characteristics) resulted in absolute reduction of relapse or recurrence risk by 16-21% compared to the other non-optimal treatment. Although these novel results require replication before clinical application, selecting optimal continuation treatment (i.e., personalizing treatment) for higher risk A-CT responders may decrease risks of MDD relapse and recurrence substantively. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
Kubo, Emi; Yamamoto, Noboru; Nokihara, Hiroshi; Fujiwara, Yutaka; Horinouchi, Hidehito; Kanda, Shintaro; Goto, Yasushi; Ohe, Yuichiro
2017-01-01
The epidermal growth factor receptor (EGFR) tyrosine kinase inhibitor gefitinib was initially approved in Japan in 2002 for the treatment of advanced or metastatic non-small-cell lung cancer (NSCLC); however, the optimal order of conventional cytotoxic chemotherapy (carboplatin and paclitaxel) and gefitinib administration has not been determined. We conducted a randomized phase II study of carboplatin and paclitaxel followed by gefitinib vs. gefitinib followed by carboplatin and paclitaxel to select a candidate for further development in a phase III study of chemotherapy-naïve patients with advanced or metastatic NSCLC, regardless of their EGFR mutation status. A total of 97 patients meeting this description were randomly assigned to arm A (carboplatin and paclitaxel followed by gefitinib; n=49) or B (gefitinib followed by carboplatin and paclitaxel; n=48) from June, 2003 to October, 2005. Carboplatin and paclitaxel were administered in 4 cycles every 3 weeks; gefitinib was continued until disease progression or development of unacceptable toxicity. The primary endpoint was overall survival; the secondary endpoints were response rate and adverse event prevalence. The median overall follow-up was 65.1 months (range, 28.7-75.1 months). The major toxicities were hematological (carboplatin and paclitaxel) or skin rash, diarrhea and hepatic dysfunction (gefitinib). Interstitial lung disease was observed in 1 patient from each arm. In arms A and B, the carboplatin and paclitaxel response rate, gefitinib response rate, and median survival durations were 34.8 and 26.5%, 33.3 and 35.7%, and 18.8 and 17.2 months, respectively. Arm A was selected for a subsequent phase III study.
Speckle phase near random surfaces
NASA Astrophysics Data System (ADS)
Chen, Xiaoyi; Cheng, Chuanfu; An, Guoqiang; Han, Yujing; Rong, Zhenyu; Zhang, Li; Zhang, Meina
2018-03-01
Based on Kirchhoff approximation theory, the speckle phase near random surfaces with different roughness is numerically simulated. As expected, the properties of the speckle phase near the random surfaces differ from those in the far field. In addition, as the scattering distance and roughness increase, the average fluctuations of the speckle phase become larger. Unusually, the speckle phase is somewhat similar to the corresponding surface topography. We have performed experiments to verify the theoretical simulation results. The studies in this paper contribute to understanding the evolution of the speckle phase near a random surface and suggest a possible way to identify a random surface structure from its speckle phase.
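The resemblance between near-field speckle phase and surface topography can be illustrated numerically. This is a hedged sketch, not the paper's Kirchhoff code: the rough surface is modeled as a smoothed random phase screen, propagated a short distance with the angular-spectrum method; grid size, wavelength, roughness, and correlation length are all assumptions.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a 2-D complex field a distance z (on-axis reference phase removed)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    k = 2 * np.pi / wavelength
    kz = k * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * (kz - k) * z) * (arg > 0)   # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

rng = np.random.default_rng(2)
n, dx, wavelength = 256, 1e-6, 0.633e-6          # grid, pixel size, He-Ne wavelength
sigma = 0.3                                       # rms phase roughness (rad)

# Random surface with a few-pixel correlation length: low-pass filtered noise.
spec = np.fft.fft2(rng.standard_normal((n, n)))
f = np.fft.fftfreq(n)
GX, GY = np.meshgrid(f, f)
surf = np.real(np.fft.ifft2(spec * np.exp(-(GX**2 + GY**2) / (2 * 0.05**2))))
surf *= sigma / surf.std()

field_near = angular_spectrum(np.exp(1j * surf), wavelength, dx, z=2e-6)
phase_near = np.angle(field_near)
# A few wavelengths from the surface, the speckle phase still tracks the topography.
corr = np.corrcoef(surf.ravel(), phase_near.ravel())[0, 1]
print(corr)
```

At larger `z` the correlation decays as diffraction scrambles the phase, consistent with the far-field behavior described above.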
Satlin, Andrew; Wang, Jinping; Logovinsky, Veronika; Berry, Scott; Swanson, Chad; Dhadda, Shobha; Berry, Donald A
2016-01-01
Recent failures in phase 3 clinical trials in Alzheimer's disease (AD) suggest that novel approaches to drug development are urgently needed. Phase 3 risk can be mitigated by ensuring that clinical efficacy is established before initiating confirmatory trials, but traditional phase 2 trials in AD can be lengthy and costly. We designed a Bayesian adaptive phase 2, proof-of-concept trial with a clinical endpoint to evaluate BAN2401, a monoclonal antibody targeting amyloid protofibrils. The study design used dose response and longitudinal modeling. Simulations were used to refine study design features to achieve optimal operating characteristics. The study design includes five active treatment arms plus placebo, a clinical outcome, 12-month primary endpoint, and a maximum sample size of 800. The average overall probability of success is ≥80% when at least one dose shows a treatment effect that would be considered clinically meaningful. Using frequent interim analyses, the randomization ratios are adapted based on the clinical endpoint, and the trial can be stopped for success or futility before full enrollment. Bayesian statistics can enhance the efficiency of analyzing the study data. The adaptive randomization generates more data on doses that appear to be more efficacious, which can improve dose selection for phase 3. The interim analyses permit stopping as soon as a predefined signal is detected, which can accelerate decision making. Both features can reduce the size and duration of the trial. This study design can mitigate some of the risks associated with advancing to phase 3 in the absence of data demonstrating clinical efficacy. Limitations to the approach are discussed.
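The response-adaptive randomization ingredient can be illustrated with a toy: two arms with invented response rates and Beta-Bernoulli Thompson sampling, far simpler than the trial's five-dose design with longitudinal modeling, but showing how allocation drifts toward the arm that looks more efficacious.

```python
import numpy as np

# Thompson-style adaptive allocation over two arms with Beta(1,1) priors.
# Response rates are hypothetical, purely for illustration.
rng = np.random.default_rng(3)
p_true = [0.30, 0.50]            # arm 0 (control) vs arm 1 (active)
successes = np.zeros(2)
failures = np.zeros(2)
alloc = np.zeros(2)

for patient in range(400):
    # Draw each arm's response rate from its Beta posterior ...
    draws = rng.beta(1 + successes, 1 + failures)
    arm = int(np.argmax(draws))  # ... and assign the arm that looks best now
    alloc[arm] += 1
    if rng.random() < p_true[arm]:
        successes[arm] += 1
    else:
        failures[arm] += 1

print(alloc)                     # most patients end up on the better arm
```

The trial's design additionally runs frequent interim analyses with stopping rules for success or futility; the adaptive allocation above is only the data-concentration mechanism.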
Sequential therapy in metastatic clear cell renal carcinoma: TKI-TKI vs TKI-mTOR.
Felici, Alessandra; Bria, Emilio; Tortora, Giampaolo; Cognetti, Francesco; Milella, Michele
2012-12-01
With seven targeted agents, directed against the VEGF/VEGF receptor (VEGFR) axis or the mTOR pathway, approved for the treatment of metastatic renal cell carcinoma and more active agents in advanced phase of clinical testing, questions have arisen with regard to their optimal use, either in combination or in sequence. One of the most compelling (and debated) issues is whether continued VEGF/VEGFR inhibition with agents hitting the same targets (TKI-TKI) affords better results than switching mechanisms of action by alternating VEGFR and mTOR inhibition (TKI-mTOR). In this article, the authors review the (little) available evidence coming from randomized Phase III clinical trials and try to fill in the (many) remaining gaps using evidence from small-size, single-arm Phase II studies and retrospective series, as well as reviewing preclinical evidence supporting either strategy.
Trophic or full nutritional support?
Arabi, Yaseen M; Al-Dorzi, Hasan M
2018-06-04
Full nutritional support during the acute phase of critical illness has traditionally been recommended to reduce catabolism and prevent malnutrition. Approaches to achieve full nutrition include early initiation of nutritional support, targeting the full nutritional requirement as soon as possible, and initiation of supplemental parenteral nutrition when enteral nutrition does not reach the target. Existing evidence supports early enteral nutrition over delayed enteral nutrition or early parenteral nutrition. Recent randomized controlled trials have demonstrated that permissive underfeeding or trophic feeding is associated with outcomes similar to full feeding in the acute phase of critical illness. In patients with refeeding syndrome, patients at high nutritional risk, and patients with shock, early enteral nutrition targeting full nutritional goals may be associated with worse outcomes compared with a less aggressive enteral nutrition strategy. A two-phase approach to nutritional support may account for the physiologic changes during critical illness more appropriately than a one-phase approach. Further evidence is awaited on the optimal protein amount during critical illness and on feeding patients at high nutritional risk or with acute gastrointestinal injury.
Biberoglu, Ebru H; Tanrıkulu, Filiz; Erdem, Mehmet; Erdem, Ahmet; Biberoglu, Kutay Omer
2016-01-01
Vaginal progesterone (P) has been suggested for luteal phase support (LPS) in controlled ovarian stimulation (COH)-intrauterine insemination (IUI) cycles; however, no consensus exists about the best P dose. Therefore, considering the fecundability rate as the primary end point, our main objective was to find the optimal dose of P in COH-IUI cycles by comparing two groups of women, each comprising 100 women on either 300 mg or 600 mg of intravaginal P tablets, in a prospective randomized study design. The mean age of the women, duration of infertility, basal and day-of-hCG-injection hormone levels in the female, and sperm parameters were similar in the two study groups. The duration and dose of gonadotropin given, number of follicles, endometrial thickness, and the total, ongoing, and multiple pregnancy rates were also comparable in both groups. We therefore conclude that 300 mg of intravaginal micronized P should be the maximum dose for LPS in IUI cycles.
Demery, Mounira El; Thézenas, Simon; Pouessel, Damien; Culine, Stéphane
2012-02-01
Cisplatin is the backbone of chemotherapeutic regimens used in the treatment of advanced transitional cell carcinoma of the urothelium. However, about 50% of patients cannot be administered cisplatin because of impaired renal function. A review of the different approaches that have been developed in this patient population was performed through a Medline search from 1 January 1998 to 31 December 2010. Twenty-six studies, including 25 phase II studies and one randomized phase II/III study, were analyzed. All regimens except one were based on gemcitabine and/or carboplatin and/or paclitaxel. Only five (20%) of the 25 phase II studies actually included homogeneous patients with impaired renal function, defined by a creatinine clearance below 60 ml/min. One hundred and eight patients with a median creatinine clearance ranging from 28 to 48 ml/min received four different chemotherapy regimens including one to four drugs. The response rates varied from 24 to 56% and survival ranged from 7 to 15 months. No standard chemotherapy can be recommended from the literature data. Future randomized studies will have to address the following questions: what is the optimal definition of cisplatin eligibility? Which platinum salt should be used? Is a platinum salt necessary? How many drugs should be delivered?
Chen, Ke; Feng, Yijun; Yang, Zhongjie; Cui, Li; Zhao, Junming; Zhu, Bo; Jiang, Tian
2016-10-24
An ultrathin metasurface comprising various sub-wavelength meta-particles offers promising advantages in controlling electromagnetic waves by spatially manipulating the wavefront characteristics across the interface. The recently proposed digital coding metasurface can even simplify the design and optimization procedures thanks to the digitalization of the meta-particle geometry. However, current attempts to implement digital metasurfaces still utilize several structural meta-particles to obtain certain electromagnetic responses, and require time-consuming optimization, especially in multi-bit coding designs. In this regard, we present the use of a single structured meta-particle with various orientations, based on geometric phase, to achieve either 1-bit or multi-bit digital metasurfaces. Particular electromagnetic wave scattering patterns, dependent on the incident polarization, can be tailored by metasurfaces encoded with regular sequences. By contrast, polarization-insensitive diffusion-like scattering can also be achieved by digital metasurfaces encoded with randomly distributed coding sequences, leading to substantial suppression of backward scattering over a broad microwave frequency band. The proposed digital metasurfaces provide simple designs and reveal new opportunities for controlling electromagnetic wave scattering with or without polarization dependence.
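Why the coding sequence shapes scattering can be shown with a back-of-envelope array-factor model: treat each meta-particle as an isotropic reflector with a 0 or π phase (1-bit coding) and compare a uniform code with a random one. This 1-D toy (element spacing, array size, and isotropic elements are assumptions, not the paper's metasurface geometry) reproduces the qualitative effect: a random code suppresses the specular peak and diffuses the energy.

```python
import numpy as np

def array_factor(code, d=0.5):
    """Normalized far-field power pattern of a 1-D coded array.

    code: array of bits (0 -> 0 phase, 1 -> pi phase); d: spacing in wavelengths.
    """
    n = len(code)
    theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
    phases = np.pi * np.asarray(code, dtype=float)
    k = 2 * np.pi                              # wavelength normalized to 1
    pos = d * np.arange(n)
    af = np.exp(1j * (k * np.outer(np.sin(theta), pos) + phases)).sum(axis=1)
    return np.abs(af) ** 2 / n ** 2            # 1.0 = fully coherent peak

rng = np.random.default_rng(4)
uniform = np.zeros(32)                          # all elements in phase
random_code = rng.integers(0, 2, 32)            # random 1-bit coding sequence
peak_uniform = array_factor(uniform).max()
peak_random = array_factor(random_code).max()
print(peak_uniform, peak_random)                # random coding lowers the peak
```

In the paper the 0/π states come from meta-particle orientation via the geometric (Pancharatnam-Berry) phase, which is what makes the response polarization-dependent; the array-factor argument above is polarization-blind by construction.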
NASA Astrophysics Data System (ADS)
Murakami, Hisashi; Gunji, Yukio-Pegio
2017-07-01
Although foraging patterns have long been predicted to adapt optimally to environmental conditions, empirical evidence for this has emerged only in recent years. This evidence suggests that the search strategy of animals is open to change, so that animals can respond flexibly to their environment. In this study, we began with a simple computational model that possesses the principal features of an intermittent strategy, i.e., careful local searches separated by longer relocation steps, in which an agent follows a rule to switch between the two phases but can misunderstand this rule, i.e., the agent follows an ambiguous switching rule. Thanks to this ambiguity, the agent's foraging strategy can change continuously. First, we demonstrate that our model can exhibit an optimal change of strategy from Brownian-type to Lévy-type depending on the prey density, and we investigate the distribution of time intervals for switching between the phases. Moreover, we show that the model can display higher search efficiency than a correlated random walk.
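The two-phase mechanism described above can be sketched numerically. The toy walker below is our own illustration, not the authors' code; the step lengths, switching probability, and the "misreading" probability are all arbitrary choices. It alternates between short Brownian-like local steps and long relocation steps under an ambiguous switching rule:

```python
import numpy as np

def intermittent_search(n_steps=2000, p_switch=0.1, eps=0.2,
                        local_step=0.5, relocate_step=10.0, seed=0):
    """Toy two-phase intermittent walker with an ambiguous switching rule.

    In the 'local' phase the agent takes short random steps; in the
    'relocate' phase it takes a long ballistic step. The nominal rule
    switches phase with probability p_switch, but with probability eps
    the agent misreads the rule and does the opposite -- the ambiguity
    that lets the effective strategy drift between Brownian and
    Levy-like behaviour.
    """
    rng = np.random.default_rng(seed)
    pos = np.zeros(2)
    traj = [pos.copy()]
    phase = "local"
    for _ in range(n_steps):
        nominal = rng.random() < p_switch        # rule says: switch phase?
        if rng.random() < eps:                   # ambiguity: misread the rule
            nominal = not nominal
        if nominal:
            phase = "relocate" if phase == "local" else "local"
        step = local_step if phase == "local" else relocate_step
        angle = rng.uniform(0, 2 * np.pi)
        pos = pos + step * np.array([np.cos(angle), np.sin(angle)])
        traj.append(pos.copy())
    return np.array(traj)

traj = intermittent_search()
steps = np.linalg.norm(np.diff(traj, axis=0), axis=1)
```

Varying `eps` and `p_switch` changes the mixture of short and long moves, which is the knob the model tunes against prey density.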
Kilowatt high-efficiency narrow-linewidth monolithic fiber amplifier operating at 1034 nm
NASA Astrophysics Data System (ADS)
Naderi, Nader A.; Flores, Angel; Anderson, Brian M.; Rowland, Ken; Dajani, Iyad
2016-03-01
Power scaling investigation of a narrow-linewidth, Ytterbium-doped all-fiber amplifier operating at 1034 nm is presented. Nonlinear stimulated Brillouin scattering (SBS) effects were suppressed through the utilization of an external phase modulation technique. Here, the power amplifier was seeded with a spectrally broadened master oscillator and the results were compared using both pseudo-random bit sequence (PRBS) and white noise source (WNS) phase modulation formats. By utilizing an optical band pass filter as well as optimizing the length of fiber used in the pre-amplifier stages, we were able to appreciably suppress unwanted amplified spontaneous emission (ASE). Notably, through PRBS phase modulation, greater than two-fold enhancement in threshold power was achieved when compared to the WNS modulated case. Consequently, by further optimizing both the power amplifier length and PRBS pattern at a clock rate of 3.5 GHz, we demonstrated 1 kilowatt of power with a slope efficiency of 81% and an overall ASE content of less than 1%. Beam quality measurements at 1 kilowatt provided near diffraction-limited operation (M2 < 1.2) with no sign of modal instability. To the best of our knowledge, the power scaling results achieved in this work represent the highest power reported for a spectrally narrow all-fiber amplifier operating at < 1040 nm in Yb-doped silica-based fiber.
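A pseudo-random bit sequence of the kind used for the phase modulation is conventionally generated with a linear-feedback shift register. The sketch below implements the standard PRBS7 pattern (feedback polynomial x^7 + x^6 + 1); the choice of PRBS7 is purely illustrative, since the abstract states only the 3.5 GHz clock rate, not the pattern length:

```python
def prbs7(n_bits, state=0x7F):
    """Fibonacci LFSR for PRBS7 (feedback polynomial x^7 + x^6 + 1,
    i.e. x_t = x_{t-6} XOR x_{t-7}); maximal length, period 2^7 - 1 = 127."""
    bits = []
    for _ in range(n_bits):
        bits.append(state & 1)                   # output tap: LSB of register
        fb = ((state >> 6) ^ (state >> 5)) & 1   # taps at delays 7 and 6
        state = ((state << 1) | fb) & 0x7F
    return bits

seq = prbs7(254)   # two full periods of the 127-bit pattern
```

Driving a phase modulator with such a sequence spreads the narrow seed line into a comb of lines at the clock-rate spacing, which is what raises the SBS threshold.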
Identification of terrain cover using the optimum polarimetric classifier
NASA Technical Reports Server (NTRS)
Kong, J. A.; Swartz, A. A.; Yueh, H. A.; Novak, L. M.; Shin, R. T.
1988-01-01
A systematic approach for the identification of terrain media such as vegetation canopy, forest, and snow-covered fields is developed using the optimum polarimetric classifier. The covariance matrices for various terrain cover are computed from theoretical models of random medium by evaluating the scattering matrix elements. The optimal classification scheme makes use of a quadratic distance measure and is applied to classify a vegetation canopy consisting of both trees and grass. Experimentally measured data are used to validate the classification scheme. Analytical and Monte Carlo simulated classification errors using the fully polarimetric feature vector are compared with classification based on single features which include the phase difference between the VV and HH polarization returns. It is shown that the full polarimetric results are optimal and provide better classification performance than single feature measurements.
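The quadratic distance measure behind the optimal polarimetric classifier is essentially a Gaussian log-likelihood comparison against each class covariance. Below is a minimal sketch using synthetic real-valued feature vectors as stand-ins for the complex polarimetric scattering vectors of the paper; the means and covariances are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for two terrain classes ("trees" vs "grass"); in the
# polarimetric case each sample would be the complex vector [S_HH, S_HV, S_VV].
n, d = 500, 3
mean_a, mean_b = np.zeros(d), np.ones(d)
cov_a = np.diag([1.0, 0.5, 0.2])
cov_b = np.array([[1.0, 0.6, 0.0],
                  [0.6, 1.0, 0.3],
                  [0.0, 0.3, 1.0]])

Xa = rng.multivariate_normal(mean_a, cov_a, n)
Xb = rng.multivariate_normal(mean_b, cov_b, n)

def quad_distance(x, mean, cov):
    """Gaussian quadratic distance: Mahalanobis term plus log-det penalty."""
    diff = x - mean
    return diff @ np.linalg.solve(cov, diff) + np.log(np.linalg.det(cov))

def classify(x):
    """Assign the class whose quadratic distance is smaller."""
    da = quad_distance(x, mean_a, cov_a)
    db = quad_distance(x, mean_b, cov_b)
    return 0 if da < db else 1

pred_a = [classify(x) for x in Xa]
pred_b = [classify(x) for x in Xb]
accuracy = (pred_a.count(0) + pred_b.count(1)) / (2 * n)
```

As in the paper, the class statistics here are taken as known (computed from a model) rather than estimated from the test samples themselves.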
Data transmission system and method
NASA Technical Reports Server (NTRS)
Bruck, Jehoshua (Inventor); Langberg, Michael (Inventor); Sprintson, Alexander (Inventor)
2010-01-01
A method of transmitting data packets, where randomness is added to the schedule. Universal broadcast schedules using encoding and randomization techniques are also discussed, together with optimal randomized schedules and an approximation algorithm for finding near-optimal schedules.

Influence maximization in complex networks through optimal percolation
NASA Astrophysics Data System (ADS)
Morone, Flaviano; Makse, Hernán A.
2015-08-01
The whole frame of interconnections in complex networks hinges on a specific set of structural nodes, much smaller than the total size, which, if activated, would cause the spread of information to the whole network, or, if immunized, would prevent the diffusion of a large scale epidemic. Localizing this optimal, that is, minimal, set of structural nodes, called influencers, is one of the most important problems in network science. Despite the vast use of heuristic strategies to identify influential spreaders, the problem remains unsolved. Here we map the problem onto optimal percolation in random networks to identify the minimal set of influencers, which arises by minimizing the energy of a many-body system, where the form of the interactions is fixed by the non-backtracking matrix of the network. Big data analyses reveal that the set of optimal influencers is much smaller than the one predicted by previous heuristic centralities. Remarkably, a large number of previously neglected weakly connected nodes emerges among the optimal influencers. These are topologically tagged as low-degree nodes surrounded by hierarchical coronas of hubs, and are uncovered only through the optimal collective interplay of all the influencers in the network. The present theoretical framework may hold a larger degree of universality, being applicable to other hard optimization problems exhibiting a continuous transition from a known phase.
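The collective-influence score that comes out of this optimal-percolation framework can be sketched for the smallest ball radius. The snippet below is our illustration only; the published algorithm uses larger radii and adaptively recomputes scores after each node removal. For ℓ = 1 the frontier of the ball around node i is simply its set of neighbours:

```python
import numpy as np

def collective_influence(adj, ell=1):
    """CI_ell(i) = (k_i - 1) * sum_{j on frontier of Ball(i, ell)} (k_j - 1).
    Sketch for ell = 1, where the frontier is just the neighbours of i."""
    n = len(adj)
    deg = adj.sum(axis=1)
    ci = np.zeros(n)
    for i in range(n):
        neighbours = np.nonzero(adj[i])[0]
        ci[i] = (deg[i] - 1) * np.sum(deg[neighbours] - 1)
    return ci

# Small example graph: node 0 is a hub tied into a triangle plus a tail.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (3, 4), (4, 5)]
n = 6
adj = np.zeros((n, n), dtype=int)
for u, v in edges:
    adj[u, v] = adj[v, u] = 1

ci = collective_influence(adj)
```

Greedily removing the top-CI node and recomputing is the usual heuristic for approximating the minimal influencer set.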
Wang, Haiyang; Yu, Xiaoqing; Fan, Yun
2017-06-20
With the breakthroughs achieved with programmed death-1 (PD-1)/PD-L1 inhibitor monotherapy as first-line and second-line treatment in advanced non-small cell lung cancer (NSCLC), the treatment strategy is gradually evolving and being optimized. Immune combination therapy expands the population that benefits and improves the curative effect. A series of randomized phase III trials are ongoing. In this review, we discuss the prospects and current status of immune checkpoint inhibitors as first-line treatment in advanced NSCLC patients.
Gold, Linda Stein; Lebwohl, Mark G; Sugarman, Jeffrey L; Pariser, David M; Lin, Tina; Martin, Gina; Pillai, Radhakrishnan; Israel, Robert; Ramakrishna, Tage
2018-03-31
Topical corticosteroids are the mainstay of psoriasis treatment, but long-term safety concerns limit their use. Combination with tazarotene may optimize efficacy while minimizing safety and tolerability concerns. In patients with moderate-to-severe plaque psoriasis treated with halobetasol propionate/tazarotene (HP/TAZ) lotion, improvement is noted within 2 weeks, with few adverse effects observed after 8 weeks. HP/TAZ lotion may provide a realistic topical option for psoriasis management. Copyright © 2018. Published by Elsevier Inc.
Lee, Keun-Wook; Zang, Dae Young; Ryu, Min-Hee; Kim, Ki Hyang; Kim, Mi-Jung; Han, Hye Sook; Koh, Sung Ae; Park, Jin Hyun; Kim, Jin Won; Nam, Byung-Ho; Choi, In Sil
2017-12-01
The combination of a fluoropyrimidine [5-fluorouracil (5-FU), capecitabine, or S-1] with a platinum analog (cisplatin or oxaliplatin) is the most widely accepted first-line chemotherapy regimen for metastatic or recurrent advanced gastric cancer (AGC), based on the results of clinical trials. However, there is little evidence to guide chemotherapy for elderly patients with AGC because of under-representation of this age group in clinical trials. Thus, the aim of this study is to determine the optimal chemotherapy regimen for elderly patients with AGC by comparing the efficacy and safety of combination therapy versus monotherapy as first-line chemotherapy. This study is a randomized, controlled, multicenter, phase III trial. A total of 246 elderly patients (≥70 years old) with metastatic or recurrent AGC who have not received previous palliative chemotherapy will be randomly allocated to a combination therapy group or a monotherapy group. Patients randomized to the combination therapy group will receive fluoropyrimidine plus platinum combination chemotherapy (capecitabine/cisplatin, S-1/cisplatin, capecitabine/oxaliplatin, or 5-FU/oxaliplatin), and those randomized to the monotherapy group will receive fluoropyrimidine monotherapy (capecitabine, S-1, or 5-FU). The primary outcome is the overall survival of patients in each treatment group. The secondary outcomes include progression-free survival, response rate, quality of life, and safety. We are conducting this pragmatic trial to determine whether elderly patients with AGC will obtain the same benefit from chemotherapy as younger patients. We expect that this study will help guide decision-making for the optimal treatment of elderly patients with AGC.
Hudson, James I; McElroy, Susan L; Ferreira-Cornwell, M Celeste; Radewonuk, Jana; Gasior, Maria
2017-09-01
The ability of pharmacotherapies to prevent relapse and maintain efficacy with long-term treatment in psychiatric conditions is important. To assess lisdexamfetamine dimesylate maintenance of efficacy in adults with moderate to severe binge-eating disorder. A multinational, phase 3, double-blind, placebo-controlled, randomized withdrawal study including 418 participants was conducted at 49 clinical research study sites from January 27, 2014, to April 8, 2015. Eligible adults met DSM-IV-R binge-eating disorder criteria and had moderate to severe binge eating disorder (≥3 binge-eating days per week for 14 days before open-label baseline; Clinical Global Impressions-Severity [CGI-S] scores ≥4 [moderate severity] at screening and open-label baseline). Following a 12-week, open-label phase (dose optimization, 4 weeks [lisdexamfetamine dimesylate, 50 or 70 mg]; dose maintenance, 8 weeks), lisdexamfetamine responders (≤1 binge eating day per week for 4 consecutive weeks and CGI-S scores ≤2 at week 12) were randomized to placebo or continued lisdexamfetamine during a 26-week, double-blind, randomized withdrawal phase. Lisdexamfetamine administration. The primary outcome variable, time to relapse (≥2 binge-eating days per week for 2 consecutive weeks and ≥2-point CGI-S score increases from randomized withdrawal baseline), was analyzed using a log-rank test (primary analysis); the analysis was stratified for dichotomized 4-week cessation status. Safety assessments included treatment-emergent adverse events. Of the 418 participants enrolled in the open-label phase of the study, 411 (358 [87.1%] women; mean [SD] age, 38.3 [10.4] years) were included in the safety analysis set. Of 275 randomized lisdexamfetamine responders (placebo, n = 138; lisdexamfetamine, n = 137), the observed proportions of participants meeting relapse criteria were 3.7% (5 of 136) for lisdexamfetamine and 32.1% (42 of 131) for placebo. 
Lisdexamfetamine demonstrated superiority over placebo on the log-rank test (χ² = 40.37, df = 1; P < .001) for time to relapse; the hazard ratio, based on a Cox proportional hazards model for lisdexamfetamine vs placebo, was 0.09 (95% CI, 0.04-0.23). The treatment-emergent adverse events observed were generally consistent with the known profile of lisdexamfetamine. Risk of binge-eating relapse over 6 months was lower in participants continuing lisdexamfetamine than in those randomized to placebo. The hazard for relapse was lower with lisdexamfetamine than placebo. clinicaltrials.gov Identifier: NCT02009163.
The N-policy for an unreliable server with delaying repair and two phases of service
NASA Astrophysics Data System (ADS)
Choudhury, Gautam; Ke, Jau-Chuan; Tadj, Lotfi
2009-09-01
This paper deals with an M[X]/G/1 queueing system with an additional second phase of optional service and an unreliable server, which incorporates a breakdown period and a delay period under N-policy. While the server is working on either phase of service, it may break down at any instant, and the service channel then fails for a short interval of time. The concept of a delay time is also introduced. If no customer arrives during the breakdown period, the server remains idle in the system until the queue size builds up to a threshold value N. As soon as the queue size reaches at least N, the server immediately begins the first phase of regular service for all the waiting customers; after its completion, only some of them receive the second phase of optional service. We derive the queue size distribution at a random epoch and at a departure epoch, as well as various system performance measures. Finally, we derive a simple procedure to obtain the optimal stationary policy under a suitable linear cost structure.
Review of Random Phase Encoding in Volume Holographic Storage
Su, Wei-Chia; Sun, Ching-Cherng
2012-01-01
Random phase encoding is a unique technique for volume holograms that can be applied to various applications such as holographic multiplexing storage, image encryption, and optical sensing. In this review article, we first review and discuss diffraction selectivity of random phase encoding in volume holograms, which is the most important parameter related to multiplexing capacity of volume holographic storage. We then review an image encryption system based on random phase encoding. The alignment of the phase key for decryption of the encoded image stored in holographic memory is analyzed and discussed. In the latter part of the review, an all-optical sensing system implemented by random phase encoding and holographic interconnection is presented.
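The random-phase-encoding encryption idea reviewed here is commonly illustrated by double random phase encoding in a 4f optical system. The sketch below is a minimal free-space numerical version of that standard scheme, not the volume-holographic implementation the review discusses:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy amplitude "image" and two statistically independent random phase
# masks: one in the input plane, one in the Fourier plane.
img = rng.random((64, 64))
phase1 = np.exp(2j * np.pi * rng.random(img.shape))
phase2 = np.exp(2j * np.pi * rng.random(img.shape))

# Encryption: modulate the input, transform, modulate in the Fourier
# plane, transform back. The result is a noise-like complex field.
encrypted = np.fft.ifft2(np.fft.fft2(img * phase1) * phase2)

# Decryption with the conjugate phase keys undoes both modulations.
decrypted = (np.fft.ifft2(np.fft.fft2(encrypted) * np.conj(phase2))
             * np.conj(phase1))
recovered = np.abs(decrypted)
```

The alignment sensitivity discussed in the review corresponds to shifting `phase2` relative to its position at encryption, which rapidly destroys the recovery.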
Interplay between intrinsic noise and the stochasticity of the cell cycle in bacterial colonies.
Canela-Xandri, Oriol; Sagués, Francesc; Buceta, Javier
2010-06-02
Herein we report on the effects that different stochastic contributions induce in bacterial colonies in terms of protein concentration and production. In particular, we consider for what we believe to be the first time cell-to-cell diversity due to the unavoidable randomness of the cell-cycle duration and its interplay with other noise sources. To that end, we model a recent experimental setup that implements a protein dilution protocol by means of division events to characterize the gene regulatory function at the single cell level. This approach allows us to investigate the effect of different stochastic terms upon the total randomness experimentally reported for the gene regulatory function. In addition, we show that the interplay between intrinsic fluctuations and the stochasticity of the cell-cycle duration leads to different constructive roles. On the one hand, we show that there is an optimal value of protein concentration (alternatively an optimal value of the cell cycle phase) such that the noise in protein concentration attains a minimum. On the other hand, we reveal that there is an optimal value of the stochasticity of the cell cycle duration such that the coherence of the protein production with respect to the colony average production is maximized. The latter can be considered as a novel example of the recently reported phenomenon of diversity induced resonance. Copyright (c) 2010 Biophysical Society. Published by Elsevier Inc. All rights reserved.
An evolutionary strategy based on partial imitation for solving optimization problems
NASA Astrophysics Data System (ADS)
Javarone, Marco Alberto
2016-12-01
In this work we introduce an evolutionary strategy to solve combinatorial optimization tasks, i.e. problems characterized by a discrete search space. In particular, we focus on the Traveling Salesman Problem (TSP), a famous problem whose search space grows exponentially with the number of cities, making it NP-hard. The solutions of the TSP can be codified by arrays of cities and evaluated by a fitness computed according to a cost function (e.g. the length of a path). Our method is based on the evolution of an agent population by means of an imitative mechanism we term 'partial imitation'. In particular, agents receive a random solution and then, interacting among themselves, may imitate the solutions of agents with a higher fitness. Since the imitation mechanism is only partial, agents copy only one entry (randomly chosen) of another array (i.e. solution). In doing so, the population converges towards a shared solution, behaving like a spin system undergoing a cooling process, i.e. driven towards an ordered phase. We highlight that the adopted 'partial imitation' mechanism allows the population to generate new solutions over time, before reaching the final equilibrium. Results of numerical simulations show that our method is able to find, in a finite time, both optimal and suboptimal solutions, depending on the size of the considered search space.
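The partial-imitation step can be sketched as follows. One assumption is ours: literally overwriting a single entry of a permutation would break tour validity, so this sketch places the copied city by swapping, a repair choice the abstract does not spell out:

```python
import random, math

random.seed(0)

# Random city coordinates for a small TSP instance.
n_cities = 12
cities = [(random.random(), random.random()) for _ in range(n_cities)]

def tour_length(tour):
    """Cost function: total length of the closed path through all cities."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def partial_imitate(worse, better):
    """Copy one randomly chosen entry of the better tour into the worse one,
    keeping the tour a valid permutation via a swap (our repair step)."""
    k = random.randrange(len(worse))
    city = better[k]
    j = worse.index(city)
    new = worse[:]
    new[k], new[j] = new[j], new[k]
    return new

# Population of agents, each holding a random candidate tour.
pop = [random.sample(range(n_cities), n_cities) for _ in range(30)]
best = min(pop, key=tour_length)
best_len0 = tour_length(best)

for _ in range(5000):
    a, b = random.sample(range(len(pop)), 2)
    if tour_length(pop[a]) > tour_length(pop[b]):
        a, b = b, a                      # pop[a] is now the fitter agent
    pop[b] = partial_imitate(pop[b], pop[a])
    if tour_length(pop[b]) < tour_length(best):
        best = pop[b]

best_len = tour_length(best)
```

Because the copy is only partial, imitation keeps generating new tours before the population locks into a shared one, mirroring the cooling-toward-order picture in the abstract.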
Quantum Optimization of Fully Connected Spin Glasses
NASA Astrophysics Data System (ADS)
Venturelli, Davide; Mandrà, Salvatore; Knysh, Sergey; O'Gorman, Bryan; Biswas, Rupak; Smelyanskiy, Vadim
2015-07-01
Many NP-hard problems can be seen as the task of finding a ground state of a disordered highly connected Ising spin glass. If solutions are sought by means of quantum annealing, it is often necessary to represent those graphs in the annealer's hardware by means of the graph-minor embedding technique, generating a final Hamiltonian consisting of coupled chains of ferromagnetically bound spins, whose binding energy is a free parameter. In order to investigate the effect of embedding on problems of interest, the fully connected Sherrington-Kirkpatrick model with random ±1 couplings is programmed on the D-Wave Two™ annealer using up to 270 qubits interacting on a Chimera-type graph. We present the best embedding prescriptions for encoding the Sherrington-Kirkpatrick problem in the Chimera graph. The results indicate that the optimal choice of embedding parameters could be associated with the emergence of the spin-glass phase of the embedded problem, whose presence was previously uncertain. This optimal parameter setting allows the performance of the quantum annealer to compete with (and potentially outperform, in the absence of analog control errors) optimized simulated annealing algorithms.
Drag, but not buoyancy, affects swim speed in captive Steller sea lions
Suzuki, Ippei; Sato, Katsufumi; Fahlman, Andreas; Naito, Yasuhiko; Miyazaki, Nobuyuki; Trites, Andrew W.
2014-01-01
Swimming at an optimal speed is critical for breath-hold divers seeking to maximize the time they can spend foraging underwater. Theoretical studies have predicted that the optimal swim speed for an animal while transiting to and from depth is independent of buoyancy, but is dependent on drag and metabolic rate. However, this prediction has never been experimentally tested. Our study assessed the effects of buoyancy and drag on the swim speed of three captive Steller sea lions (Eumetopias jubatus) that made 186 dives. Our study animals were trained to dive to feed at fixed depths (10–50 m) under artificially controlled buoyancy and drag conditions. Buoyancy and drag were manipulated using a pair of polyvinyl chloride (PVC) tubes attached to harnesses worn by the sea lions, and buoyancy conditions were designed to fall within the natural range of wild animals (∼12–26% subcutaneous fat). Drag conditions were changed with and without the PVC tubes, and swim speeds were recorded and compared during descent and ascent phases using an accelerometer attached to the harnesses. Generalized linear mixed-effect models with the animal as the random variable and five explanatory variables (body mass, buoyancy, dive depth, dive phase, and drag) showed that swim speed was best predicted by two variables, drag and dive phase (AIC = −139). Consistent with a previous theoretical prediction, the results of our study suggest that the optimal swim speed of Steller sea lions is a function of drag, and is independent of dive depth and buoyancy. PMID:24771620
Liu, Chenbin; Schild, Steven E; Chang, Joe Y; Liao, Zhongxing; Korte, Shawn; Shen, Jiajian; Ding, Xiaoning; Hu, Yanle; Kang, Yixiu; Keole, Sameer R; Sio, Terence T; Wong, William W; Sahoo, Narayan; Bues, Martin; Liu, Wei
2018-06-01
To investigate how spot size and spacing affect plan quality, robustness, and interplay effects of robustly optimized intensity modulated proton therapy (IMPT) for lung cancer. Two robustly optimized IMPT plans were created for 10 lung cancer patients: first by a large-spot machine with in-air energy-dependent large spot size at isocenter (σ: 6-15 mm) and spacing (1.3 σ), and second by a small-spot machine with in-air energy-dependent small spot size (σ: 2-6 mm) and spacing (5 mm). Both plans were generated by optimizing radiation dose to internal target volume on averaged 4-dimensional computed tomography scans using an in-house-developed IMPT planning system. The dose-volume histograms band method was used to evaluate plan robustness. Dose evaluation software was developed to model time-dependent spot delivery to incorporate interplay effects with randomized starting phases for each field per fraction. Patient anatomy voxels were mapped phase-to-phase via deformable image registration, and doses were scored using in-house-developed software. Dose-volume histogram indices, including internal target volume dose coverage, homogeneity, and organs at risk (OARs) sparing, were compared using the Wilcoxon signed-rank test. Compared with the large-spot machine, the small-spot machine resulted in significantly lower heart and esophagus mean doses, with comparable target dose coverage, homogeneity, and protection of other OARs. Plan robustness was comparable for targets and most OARs. With interplay effects considered, significantly lower heart and esophagus mean doses with comparable target dose coverage and homogeneity were observed using smaller spots. Robust optimization with a small spot-machine significantly improves heart and esophagus sparing, with comparable plan robustness and interplay effects compared with robust optimization with a large-spot machine. 
A small-spot machine uses a larger number of spots to cover the same tumors compared with a large-spot machine, which gives the planning system more freedom to compensate for the higher sensitivity to uncertainties and interplay effects for lung cancer treatments. Copyright © 2018 Elsevier Inc. All rights reserved.
Wang, Deyun; Wei, Shuai; Luo, Hongyuan; Yue, Chenqiang; Grunder, Olivier
2017-02-15
The randomness, non-stationarity and irregularity of air quality index (AQI) series make AQI forecasting difficult. To enhance forecast accuracy, a novel hybrid forecasting model combining a two-phase decomposition technique and an extreme learning machine (ELM) optimized by the differential evolution (DE) algorithm is developed for AQI forecasting in this paper. In phase I, complementary ensemble empirical mode decomposition (CEEMD) is utilized to decompose the AQI series into a set of intrinsic mode functions (IMFs) with different frequencies; in phase II, in order to further handle the high-frequency IMFs which would increase the forecast difficulty, variational mode decomposition (VMD) is employed to decompose the high-frequency IMFs into a number of variational modes (VMs). Then, the ELM model optimized by the DE algorithm is applied to forecast all the IMFs and VMs. Finally, the forecast value of each high-frequency IMF is obtained by adding up the forecast results of all corresponding VMs, and the forecast series of AQI is obtained by aggregating the forecast results of all IMFs. To verify and validate the proposed model, two daily AQI series from July 1, 2014 to June 30, 2016 collected from Beijing and Shanghai, located in China, are taken as the test cases to conduct the empirical study. The experimental results show that the proposed hybrid model based on the two-phase decomposition technique is remarkably superior to all other considered models for its higher forecast accuracy. Copyright © 2016 Elsevier B.V. All rights reserved.
Motta, Mario; Zhang, Shiwei
2017-11-14
We address the computation of ground-state properties of chemical systems and realistic materials within the auxiliary-field quantum Monte Carlo method. The phase constraint to control the Fermion phase problem requires the random walks in Slater determinant space to be open-ended with branching. This in turn makes it necessary to use back-propagation (BP) to compute averages and correlation functions of operators that do not commute with the Hamiltonian. Several BP schemes are investigated, and their optimization with respect to the phaseless constraint is considered. We propose a modified BP method for the computation of observables in electronic systems, discuss its numerical stability and computational complexity, and assess its performance by computing ground-state properties in several molecular systems, including small organic molecules.
Digital correlation detector for low-cost Omega navigation
NASA Technical Reports Server (NTRS)
Chamberlin, K. A.
1976-01-01
Techniques to lower the cost of using the Omega global navigation network with phase-locked loops (PLL) were developed. The technique that was accepted as being "optimal" is called the memory-aided phase-locked loop (MAPLL), since it allows operation on all eight Omega time slots with one PLL through the implementation of a random access memory. The receiver front-end and the signals that it transmits to the PLL were first described. A brief statistical analysis of these signals was then made to provide a rough comparison between the front-end presented in this work and a commercially available front-end. The hardware and theory of application of the MAPLL were described, ending with an analysis of data taken with the MAPLL. Conclusions and recommendations were also given.
Aronson, Ronnie; Cohen, Ohad; Conget, Ignacio; Runzis, Sarah; Castaneda, Javier; de Portu, Simona; Lee, Scott; Reznik, Yves
2014-07-01
In insulin-requiring type 2 diabetes patients, current insulin therapy approaches such as basal-alone or basal-bolus multiple daily injections (MDI) have not consistently achieved optimal glycemic control. Previous studies have suggested a potential benefit of continuous subcutaneous insulin infusion (CSII) in these patients. The OpT2mise study is a multicenter, randomized trial comparing CSII with MDI in a large cohort of subjects with evidence of persistent hyperglycemia despite previous MDI therapy. Subjects were enrolled into a run-in period for optimization of their MDI insulin regimen. Subjects showing persistent hyperglycemia (glycated hemoglobin [HbA1c] ≥8% and ≤12%) were then randomly assigned to CSII or continuation of an MDI regimen for a 6-month phase, followed by a single crossover of the MDI arm, switching to CSII. The primary end point is the between-group difference in mean change in HbA1c from baseline to 6 months. Secondary end points include change in mean 24-h glucose values, area under the curve and time spent in hypoglycemia and hyperglycemia, measures of glycemic excursions, change in postprandial hyperglycemia, and evaluation of treatment satisfaction. Safety end points include hypoglycemia, hospital admissions, and emergency room visits. When subject enrollment was completed in May 2013, 495 subjects had been enrolled in the study. The study completion for the primary end point is expected in January 2014. OpT2mise will represent the largest studied homogeneous cohort of type 2 diabetes patients with persistent hyperglycemia despite optimized MDI therapy. OpT2mise will help define the role of CSII in insulin intensification and define its safety, rate of hypoglycemia, patient adherence, and patient satisfaction.
Chopped random-basis quantum optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caneva, Tommaso; Calarco, Tommaso; Montangero, Simone
2011-08-15
In this work, we describe in detail the chopped random basis (CRAB) optimal control technique recently introduced to optimize time-dependent density matrix renormalization group simulations [P. Doria, T. Calarco, and S. Montangero, Phys. Rev. Lett. 106, 190501 (2011)]. Here, we study the efficiency of this control technique in optimizing different quantum processes and we show that in the considered cases we obtain results equivalent to those obtained via different optimal control methods while using less resources. We propose the CRAB optimization as a general and versatile optimal control technique.
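The CRAB idea, expanding the control field in a randomized, truncated Fourier basis and optimizing the few expansion coefficients with a gradient-free routine, can be sketched on a toy two-level state-transfer problem. Everything below (the system, duration, basis size, and Nelder-Mead as the inner optimizer) is an illustrative choice of ours, not the setup of the cited works:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

T, n_t = 1.0, 200                         # pulse duration and time grid
t = np.linspace(0, T, n_t)
dt = t[1] - t[0]
n_modes = 4                               # CRAB signature: few basis modes
freqs = (np.arange(1, n_modes + 1)        # ... at randomized frequencies
         + rng.uniform(-0.5, 0.5, n_modes)) * np.pi / T

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
psi0 = np.array([1, 0], dtype=complex)    # start in |0>
target = np.array([0, 1], dtype=complex)  # aim for |1>

def fidelity(coeffs):
    """Transfer fidelity for a control field in the truncated random basis."""
    omega = sum(c * np.sin(f * t) for c, f in zip(coeffs, freqs))
    psi = psi0
    for k in range(n_t):
        # exact 2x2 propagator exp(-i h dt) via the Pauli decomposition,
        # for h = 0.5*omega*sx + 0.5*sz (a detuned driven two-level system)
        a, b = 0.5 * omega[k], 0.5
        nrm = np.hypot(a, b)
        u = (np.cos(nrm * dt) * np.eye(2)
             - 1j * np.sin(nrm * dt) * (a * sx + b * sz) / nrm)
        psi = u @ psi
    return abs(np.vdot(target, psi)) ** 2

f0 = fidelity(np.zeros(n_modes))          # no drive: no transfer
res = minimize(lambda c: -fidelity(c), x0=np.ones(n_modes),
               method="Nelder-Mead", options={"maxiter": 2000})
f_opt = fidelity(res.x)
```

The point of the method is that only `n_modes` numbers are optimized, so a derivative-free simplex search suffices even when each fidelity evaluation is an expensive simulation.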
Modeling Training Site Vegetation Coverage Probability with a Random Optimizing Procedure: An Artificial Neural Network Approach
Guan, Biing T.; Gertner, George Z.; Alan B...
1998-05-01
A literature survey was conducted to identify artificial neural network analysis techniques applicable for modeling training site vegetation coverage probability based on past coverage.
How little data is enough? Phase-diagram analysis of sparsity-regularized X-ray computed tomography
Jørgensen, J. S.; Sidky, E. Y.
2015-01-01
We introduce phase-diagram analysis, a standard tool in compressed sensing (CS), to the X-ray computed tomography (CT) community as a systematic method for determining how few projections suffice for accurate sparsity-regularized reconstruction. In CS, a phase diagram is a convenient way to study and express certain theoretical relations between sparsity and sufficient sampling. We adapt phase-diagram analysis for empirical use in X-ray CT for which the same theoretical results do not hold. We demonstrate in three case studies the potential of phase-diagram analysis for providing quantitative answers to questions of undersampling. First, we demonstrate that there are cases where X-ray CT empirically performs comparably with a near-optimal CS strategy, namely taking measurements with Gaussian sensing matrices. Second, we show that, in contrast to what might have been anticipated, taking randomized CT measurements does not lead to improved performance compared with standard structured sampling patterns. Finally, we show preliminary results of how well phase-diagram analysis can predict the sufficient number of projections for accurately reconstructing a large-scale image of a given sparsity by means of total-variation regularization. PMID:25939620
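A single point of such a phase diagram is probed by drawing a random k-sparse vector and testing whether ℓ1 minimization (basis pursuit) recovers it exactly from m Gaussian measurements. A minimal sketch of one such probe, casting basis pursuit as a linear program (parameter values are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

n, m, k = 40, 20, 3            # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian sensing matrix

x_true = np.zeros(n)
support = rng.choice(n, k, replace=False)
x_true[support] = rng.standard_normal(k)
b = A @ x_true

# Basis pursuit  min ||x||_1  s.t.  Ax = b,  cast as an LP with x = u - v:
#   min 1'(u + v)  s.t.  [A, -A][u; v] = b,  u, v >= 0.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
x_rec = res.x[:n] - res.x[n:]
```

Sweeping the undersampling ratio m/n and the sparsity fraction k/m, and recording the empirical success probability of probes like this one, traces out the phase diagram; this point (m/n = 0.5, k/m = 0.15) lies well inside the success region.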
NASA Astrophysics Data System (ADS)
Reynolds, A. M.
2008-04-01
A random Lévy-looping model of searching is devised and optimal random Lévy-looping searching strategies are identified for the location of a single target whose position is uncertain. An inverse-square power law distribution of loop lengths is shown to be optimal when the distance between the centre of the search and the target is much shorter than the size of the longest possible loop in the searching pattern. Optimal random Lévy-looping searching patterns have recently been observed in the flight patterns of honeybees (Apis mellifera) when attempting to locate their hive and when searching after a known food source becomes depleted. It is suggested that the searching patterns of desert ants (Cataglyphis) are consistent with the adoption of an optimal Lévy-looping searching strategy.
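The inverse-square loop-length law identified as optimal above can be sampled directly by inverting its truncated cumulative distribution. A minimal sketch (the bounds l_min and l_max are arbitrary choices for illustration):

```python
import random

random.seed(1)

def sample_loop_length(l_min, l_max):
    """Draw a loop length from p(l) proportional to l**-2, truncated to
    [l_min, l_max], via inverse-CDF sampling."""
    u = random.random()
    return l_min / (1.0 - u * (1.0 - l_min / l_max))

lengths = sorted(sample_loop_length(1.0, 1000.0) for _ in range(200000))

# Heavy tail: the mean grows like ln(l_max / l_min), far above the median.
mean = sum(lengths) / len(lengths)
median = lengths[len(lengths) // 2]
print(mean, median)
```

For l_min = 1 and l_max = 1000 the median sits near 2 while the mean sits near ln(1000) ≈ 6.9, which is the heavy-tailed behavior that makes such searches efficient when the target distance is uncertain.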
NASA Technical Reports Server (NTRS)
Hauser, F. D.; Szollosi, G. D.; Lakin, W. S.
1972-01-01
COEBRA, the Computerized Optimization of Elastic Booster Autopilots, is an autopilot design program. The bulk of the design criteria is presented in the form of minimum allowed gain/phase stability margins. COEBRA has two optimization phases: (1) a phase to maximize stability margins; and (2) a phase to optimize structural bending moment load relief capability in the presence of minimum requirements on gain/phase stability margins.
The random fractional matching problem
NASA Astrophysics Data System (ADS)
Lucibello, Carlo; Malatesta, Enrico M.; Parisi, Giorgio; Sicuro, Gabriele
2018-05-01
We consider two formulations of the random-link fractional matching problem, a relaxed version of the more standard random-link (integer) matching problem. In one formulation, we allow each node to be linked to itself in the optimal matching configuration; in the other, such a link is forbidden. Both problems have the same asymptotic average optimal cost as the random-link matching problem on the complete graph. Using a replica approach and previous results of Wästlund (2010 Acta Mathematica 204 91–150), we analytically derive the finite-size corrections to the asymptotic optimal cost. We compare our results with numerical simulations and discuss the main differences between the random-link fractional matching problems and the random-link matching problem.
NASA Astrophysics Data System (ADS)
Alfalou, Ayman; Elbouz, Marwa; Jridi, Maher; Loussert, Alain
2009-09-01
In some recognition applications that require multiple images (facial identification or sign language), many images must be transmitted or stored. This requires communication systems with a good security level (encryption) and an acceptable transmission rate (compression rate). Several encryption and compression techniques can be found in the literature. However, in order to use optical correlation, encryption and compression cannot simply be deployed independently and in a cascade manner, for two reasons. First, one technique can degrade the other if the impact of one over the other is not considered. Second, standard compression can affect the correlation decision, because correlation is sensitive to the loss of information. To solve both problems, we developed a new technique to simultaneously compress and encrypt multiple images using an optimized BPOF filter. The main idea of our approach is to multiplex the spectra of the different images after a discrete cosine transform (DCT): the spectral plane is divided into several areas, each corresponding to the spectrum of one image. Encryption is achieved using the multiplexing, specific rotation functions, biometric encryption keys, and random phase keys; random phase keys are widely used in optical encryption approaches. Finally, many simulations were conducted, and the obtained results corroborate the good performance of our approach. The recording of the multiplexed and encrypted spectra is further optimized using an adapted quantization technique to improve the overall compression rate.
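The multiplexing step described above — each image's transform spectrum assigned to its own area of a shared spectral plane, with a random phase key applied for encryption — can be illustrated in a toy form. This is not the authors' BPOF-optimized scheme: the image sizes are tiny, there is no rotation function, biometric key, or quantization stage, and recovery is lossless only because no compression is applied.

```python
import numpy as np

rng = np.random.default_rng(0)

def dct_matrix(n):
    """Orthonormal DCT-II matrix (rows indexed by frequency k)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    D = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    D[0] /= np.sqrt(2)
    return D

n = 8
D = dct_matrix(n)
img1 = rng.random((n, n))
img2 = rng.random((n, n))

# Multiplex: each image's 2D DCT spectrum occupies its own area of one plane.
plane = np.zeros((n, 2 * n))
plane[:, :n] = D @ img1 @ D.T
plane[:, n:] = D @ img2 @ D.T

# Encrypt: multiply by a random phase key (the key must be kept secret).
key = np.exp(1j * 2 * np.pi * rng.random(plane.shape))
encrypted = plane * key

# Decrypt with the conjugate key, demultiplex, then inverse DCT.
recovered = np.real(encrypted * np.conj(key))
rec1 = D.T @ recovered[:, :n] @ D
rec2 = D.T @ recovered[:, n:] @ D
print(np.max(np.abs(rec1 - img1)), np.max(np.abs(rec2 - img2)))
```

Because the DCT matrix is orthonormal, the inverse transform is just its transpose, so both images are recovered to machine precision once the phase key is removed.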
NASA Astrophysics Data System (ADS)
Sharqawy, Mostafa H.
2016-12-01
Pore network models (PNMs) of Berea and Fontainebleau sandstones were constructed using nonlinear programming (NLP) and optimization methods. The constructed PNMs are considered a digital representation of the rock samples, built by matching the macroscopic properties of the porous media, and are used to conduct fluid transport simulations including single- and two-phase flow. The PNMs consisted of cubic networks of randomly distributed pore and throat sizes with various connectivity levels. The networks were optimized such that the upper and lower bounds of the pore sizes are determined using the capillary tube bundle model and the Nelder-Mead method instead of guessing them, which reduces the optimization computational time significantly. An open-source PNM framework was employed to conduct transport and percolation simulations such as invasion percolation and Darcian flow. The PNM was subsequently used to compute the macroscopic properties: porosity, absolute permeability, specific surface area, breakthrough capillary pressure, and the primary drainage curve. The pore networks were optimized so that the simulation results for the macroscopic properties are in excellent agreement with the experimental measurements. This study demonstrates that nonlinear programming and optimization methods provide a promising approach to pore network modeling when computed tomography imaging is not readily available.
NASA Astrophysics Data System (ADS)
Fourrate, K.; Loulidi, M.
2006-01-01
We suggest a disordered traffic flow model that captures many features of traffic flow. It is an extension of the Nagel-Schreckenberg (NaSch) stochastic cellular automaton for single-lane vehicular traffic. It incorporates random acceleration and deceleration terms that may be greater than one unit. Under its intrinsic dynamics, for high values of the braking probability pr, our model leads to a constant flow at intermediate densities without introducing any spatial inhomogeneities. For a system of fast drivers (pr→0), the model exhibits the density-wave behavior that was observed in car-following models with optimal velocity. For high values of pr and random deceleration, the gap distribution of the disordered model exhibits, at a critical density, a power law, which is a hallmark of self-organized criticality.
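A minimal simulation in the spirit of the extended NaSch model described above, with random acceleration and deceleration of up to two cells per step; the road length, density, and braking probability are arbitrary choices for illustration.

```python
import random

random.seed(2)

L, N, vmax, p_r, steps = 200, 50, 5, 0.3, 400  # cells, cars, max speed, braking prob

pos = sorted(random.sample(range(L), N))   # distinct cells, in driving order
vel = [0] * N

flow = 0
for t in range(steps):
    # gap to the car ahead on the ring road (index order = cyclic order)
    gaps = [(pos[(i + 1) % N] - pos[i] - 1) % L for i in range(N)]
    new_vel = []
    for i in range(N):
        v = vel[i]
        v = min(v + random.randint(1, 2), vmax)    # random acceleration (1 or 2)
        v = min(v, gaps[i])                        # never hit the car ahead
        if random.random() < p_r:
            v = max(v - random.randint(1, 2), 0)   # random deceleration (1 or 2)
        new_vel.append(v)
    vel = new_vel
    pos = [(pos[i] + vel[i]) % L for i in range(N)]
    flow += sum(vel)

mean_flow = flow / (steps * L)   # average flow per cell per step
print(mean_flow)
```

Because each car moves at most its gap, the cyclic ordering is preserved and no two cars ever occupy the same cell; scanning the density N/L and braking probability p_r traces out a fundamental diagram.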
Berman, Jesse D; Peters, Thomas M; Koehler, Kirsten A
2018-05-28
Our objective was to design a method that uses preliminary hazard mapping data to optimize the number and location of sensors within a network for a long-term assessment of occupational concentrations, while preserving the temporal variability, accuracy, and precision of predicted hazards. Particle number concentrations (PNCs) and respirable mass concentrations (RMCs) were measured with direct-reading instruments in a large heavy-vehicle manufacturing facility at 80-82 locations during 7 mapping events, stratified by day and season. Using kriged hazard mapping, a statistical approach identified optimal orders for removing locations to capture the temporal variability and high prediction precision of PNC and RMC concentrations. We compared optimal-removal, random-removal, and least-optimal-removal orders to bound prediction performance. The temporal variability of PNC was found to be higher than that of RMC, with low correlation between the two particulate metrics (ρ = 0.30). Optimal-removal orders resulted in more accurate PNC kriged estimates (root mean square error [RMSE] = 49.2) at sample locations compared with a random-removal order (RMSE = 55.7). For estimates at locations having concentrations in the upper 10th percentile, the optimal-removal order preserved average estimated concentrations better than random- or least-optimal-removal orders (P < 0.01). However, estimated average concentrations using optimal-removal were not statistically different from random-removal when averaged over the entire facility. No statistical difference was observed between optimal- and random-removal methods for RMCs, which were less variable in time and space than PNCs. Optimized removal performed better than random removal in preserving the high temporal variability and accuracy of the hazard map for PNC, but not for the more spatially homogeneous RMC.
These results can be used to reduce the number of locations used in a network of static sensors for long-term monitoring of hazards in the workplace, without sacrificing prediction performance.
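The optimal-removal idea — rank sensor locations by how little their removal degrades predictions at the dropped sites — can be sketched with a greedy loop. This toy uses inverse-distance weighting as a simple stand-in for kriging, and a synthetic hazard field; all names and constants are hypothetical, not the study's method.

```python
import math
import random

random.seed(3)

# Synthetic hazard field sampled at fixed sensor locations
locs = [(random.random(), random.random()) for _ in range(30)]

def field(x, y):
    """Invented 'true' hazard surface."""
    return math.sin(3 * x) + math.cos(2 * y)

obs = {p: field(*p) for p in locs}

def idw(p, sensors):
    """Inverse-distance-weighted estimate (a simple stand-in for kriging)."""
    num = den = 0.0
    for q in sensors:
        d2 = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
        if d2 < 1e-12:
            return obs[q]
        w = 1.0 / d2
        num += w * obs[q]
        den += w
    return num / den

def rmse(sensors):
    """Prediction error at all locations not kept in the network."""
    removed = [p for p in locs if p not in sensors]
    if not removed:
        return 0.0
    se = sum((idw(p, sensors) - obs[p]) ** 2 for p in removed)
    return math.sqrt(se / len(removed))

# Greedy optimal-removal order: at each step drop the sensor whose removal
# hurts prediction at the dropped sites the least.
kept = list(locs)
order = []
while len(kept) > 5:
    s = min(kept, key=lambda c: rmse([q for q in kept if q != c]))
    kept.remove(s)
    order.append(s)

print(rmse(kept), len(order))
```

Comparing `rmse(kept)` against the error of a same-size random subset reproduces, in miniature, the optimal- versus random-removal comparison the abstract reports.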
NASA Astrophysics Data System (ADS)
Liu, Cheng-Wei
Phase transitions and their associated critical phenomena are of fundamental importance and play a crucial role in the development of statistical physics for both classical and quantum systems. Phase transitions embody diverse aspects of physics and also have numerous applications outside physics, e.g., in chemistry, biology, and combinatorial optimization problems in computer science. Many problems can be reduced to a system consisting of a large number of interacting agents, which under some circumstances (e.g., changes of external parameters) exhibit collective behavior; this type of scenario also underlies phase transitions. The theoretical understanding of equilibrium phase transitions was put on a solid footing with the establishment of the renormalization group. In contrast, non-equilibrium phase transitions are less well understood and remain a very active research topic. One important milestone here is the Kibble-Zurek (KZ) mechanism, which provides a useful framework for describing a system whose transition point is approached through a non-equilibrium quench process. I developed two efficient Monte Carlo techniques for studying phase transitions, one for classical and one for quantum phase transitions, both within the framework of KZ scaling. For classical phase transitions, I developed a non-equilibrium quench (NEQ) simulation that completely avoids the critical-slowing-down problem. For quantum phase transitions, I developed a new algorithm, named the quasi-adiabatic quantum Monte Carlo (QAQMC) algorithm, for studying quantum quenches. I demonstrate the utility of QAQMC on the quantum Ising model and obtain high-precision results at the transition point, in particular showing generalized dynamic scaling in the quantum system. To further extend the methods, I study more complex systems such as spin glasses and random graphs, which the techniques allow us to investigate efficiently.
From the classical perspective, using the NEQ approach I verify the universality class of 3D Ising spin glasses. I also investigate random 3-regular graphs in terms of both classical and quantum phase transitions. I demonstrate that under this simulation scheme, one can extract information associated with the classical and quantum spin-glass transitions without any prior knowledge before the simulation.
Arnold, Lesley M; Arsenault, Pierre; Huffman, Cynthia; Patrick, Jeffrey L; Messig, Michael; Chew, Marci L; Sanin, Luis; Scavone, Joseph M; Pauer, Lynne; Clair, Andrew G
2014-10-01
Safety and efficacy of a once-daily controlled-release (CR) formulation of pregabalin were evaluated in patients with fibromyalgia using a placebo-controlled, randomized withdrawal design. This multicenter study included 6 weeks of single-blind pregabalin CR treatment followed by 13 weeks of double-blind treatment with placebo or pregabalin CR. The starting dose of 165 mg/day was escalated during the first 3 weeks, up to 495 mg/day, based on efficacy and tolerability. Patients with ≥50% reduction in average daily pain score at the end of the single-blind phase were randomized to continue pregabalin CR at the optimized dose (330-495 mg/day) or to placebo. The primary endpoint was time to loss of therapeutic response (LTR), defined as <30% pain reduction relative to the single-blind baseline or discontinuation owing to lack of efficacy or an adverse event (AE). Secondary endpoints included measures of pain severity, global assessment, functional status, tiredness/fatigue, and sleep. ClinicalTrials.gov NCT01271933. A total of 441 patients entered the single-blind phase; 63 were randomized to pregabalin CR and 58 to placebo. The median time to LTR (Kaplan-Meier analysis) was significantly longer in the pregabalin CR group than with placebo (58 vs. 22 days, p = 0.02). By trial end, 34/63 (54.0%) pregabalin CR and 41/58 (70.7%) placebo patients experienced LTR. Significantly more patients reported 'benefit from treatment' (Benefit, Satisfaction, and Willingness to Continue Scale) in the pregabalin CR group; no other secondary endpoints were statistically significant. Most AEs were mild to moderate in severity (most frequent: dizziness, somnolence). The percentage of pregabalin CR patients discontinuing because of AEs was 12.2% in the single-blind phase and 4.8% in the double-blind phase (placebo, 0%).
Time to LTR was significantly longer with pregabalin CR versus placebo in fibromyalgia patients who initially showed improvement with pregabalin CR, indicating maintenance of response. Pregabalin CR was well tolerated in most patients. Generalizability may be limited by study duration and selective population.
Random phase encoding for optical security
NASA Astrophysics Data System (ADS)
Wang, RuiKang K.; Watson, Ian A.; Chatwin, Christopher R.
1996-09-01
A new optical encoding method for security applications is proposed. The encoded image (encrypted into the security products) is merely a random phase image generated statistically by a computer random number generator; it contains no information from the reference pattern (stored for verification) or from the frequency-plane filter (a phase-only function used for decoding). The phase function in the frequency plane is obtained using a modified phase retrieval algorithm. The proposed method uses two phase-only functions (images), at the input and frequency planes of the optical processor, leading to maximum optical efficiency. Computer simulation shows that the proposed method is robust for optical security applications.
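The retrieval of a frequency-plane phase-only filter can be sketched with a Gerchberg-Saxton-style iteration: propagate the fixed random-phase input through the current filter, enforce the reference amplitude at the output plane, and project the correction back onto a phase-only filter. This is a generic GS variant for illustration, not necessarily the authors' modified algorithm; the target pattern and sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32

# Random phase-only image at the input plane (the "encoded" image)
phi = 2 * np.pi * rng.random((n, n))
U = np.fft.fft2(np.exp(1j * phi))          # fixed input spectrum

# Reference pattern to verify against (toy target: a bright square)
target = np.zeros((n, n))
target[12:20, 12:20] = 1.0
target /= np.linalg.norm(target)

# GS-style retrieval of the frequency-plane phase-only filter psi
psi = 2 * np.pi * rng.random((n, n))
for _ in range(200):
    g = np.fft.ifft2(U * np.exp(1j * psi))     # propagate to the output plane
    g = target * np.exp(1j * np.angle(g))      # enforce the reference amplitude
    G = np.fft.fft2(g)
    psi = np.angle(G) - np.angle(U)            # keep the filter phase-only

out = np.abs(np.fft.ifft2(U * np.exp(1j * psi)))
out /= np.linalg.norm(out)
corr = float(np.sum(out * target))             # normalized correlation in [0, 1]
print(corr)
```

Both free functions are phase-only (the input image and the filter), mirroring the two-phase-function architecture the abstract credits with maximum optical efficiency.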
Valine needs in starting and growing Cobb (500) broilers.
Tavernari, F C; Lelis, G R; Vieira, R A; Rostagno, H S; Albino, L F T; Oliveira Neto, A R
2013-01-01
Two independent experiments were conducted with male Cobb × Cobb 500 broilers to determine the optimal valine-to-digestible-lysine ratio for broiler development. We conducted a randomized block experiment with 7 treatments, each with 8 replicates of 25 starter birds (8 to 21 d of age) and 20 finisher birds (30 to 43 d of age). To prevent any excess of digestible lysine, 93% of the recommended level of digestible lysine was used to evaluate the valine-to-lysine ratio. The dietary digestible lysine levels were 10.7 and 9.40 g/kg for the starting and growing phases, respectively. A control diet with 100% of the recommended level of lysine and an adequate valine-to-lysine ratio was also used. Feed intake, weight gain, feed conversion ratio, and carcass parameters were evaluated. The treatments had no significant effect on feed intake or carcass parameters in the starter and finisher phases. However, during both of the studied phases, we observed a quadratic effect on weight gain and the feed conversion ratio. The broilers of both phases that were fed test diets with the lower valine-to-lysine (Val/Lys) ratio had poorer performance compared with broilers fed the control diets. However, when higher Val/Lys ratios were used for the starting and growing broilers fed test diets, the 2 groups had similar performance. During the starting phase, weight gain and the feed conversion ratio improved by 5.5% in broilers fed a higher Val/Lys ratio compared with broilers fed the basal diets. The broilers in the growing phase also had improved performance (by 7 to 8%) when the test diets had higher Val/Lys ratios. Based on the analysis of the starter phase data, we conclude that the optimal digestible Val/Lys ratio for Cobb × Cobb 500 broilers is 77%, whereas for birds in the finisher phase (30 to 43 d of age), a digestible Val/Lys ratio of 76% is suggested.
Selecting Random Distributed Elements for HIFU using Genetic Algorithm
NASA Astrophysics Data System (ADS)
Zhou, Yufeng
2011-09-01
As an effective and noninvasive therapeutic modality for tumor treatment, high-intensity focused ultrasound (HIFU) has attracted attention from both physicians and patients. New generations of HIFU systems with the ability to electrically steer the HIFU focus using phased-array transducers have been under development. The presence of side and grating lobes may cause undesired thermal accumulation at the interface of the coupling medium (i.e., water) and the skin, or in the intervening tissue. Although sparse, randomly distributed piston elements can reduce the amplitude of grating lobes, there are theoretically no grating lobes with the use of concave elements in the new phased-array HIFU. A new HIFU transmission strategy is proposed in this study: firing some but not all elements for a certain period and then changing to another group for the next firing sequence. The advantages are: (1) the asymmetric position of active elements may reduce the side lobes, and (2) each element has some resting time during the entire HIFU ablation (up to several hours for some clinical applications), so that the loss of transducer efficiency due to thermal accumulation is minimized. A genetic algorithm was used to select the randomly distributed elements in a HIFU array, with the amplitudes of the first side lobes at the focal plane used as the fitness value in the optimization. Overall, it is suggested that the proposed strategy could reduce the side lobes and the consequent side effects, and that the genetic algorithm is effective in selecting randomly distributed elements in a HIFU array.
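A genetic search over element subsets can be sketched as follows: binary masks with exactly K active elements, fitness given by the peak normalized side-lobe level of a 1D array factor, and a mutation that swaps one active element for an inactive one. The geometry and GA settings below are invented for illustration (the study's transducers are concave 2D arrays, and its fitness is evaluated at the focal plane).

```python
import cmath
import math
import random

random.seed(4)

M, K = 32, 16                              # candidate slots, active elements
positions = [0.7 * m for m in range(M)]    # element positions in wavelengths (assumed)

def peak_sidelobe(mask):
    """Highest side-lobe level of the array factor, normalized by the main lobe."""
    def af(u):  # u = sin(theta)
        s = sum(cmath.exp(2j * math.pi * positions[m] * u)
                for m in range(M) if mask[m])
        return abs(s)
    main = af(0.0)
    side = max(af(i / 100) for i in range(8, 101))   # sample outside the main lobe
    return side / main

def random_mask():
    on = set(random.sample(range(M), K))
    return [1 if m in on else 0 for m in range(M)]

def mutate(mask):
    child = mask[:]
    a = random.choice([m for m in range(M) if child[m]])
    b = random.choice([m for m in range(M) if not child[m]])
    child[a], child[b] = 0, 1      # swap keeps exactly K active elements
    return child

# Simple elitist genetic loop (selection + mutation; crossover omitted for brevity)
pop = [random_mask() for _ in range(20)]
for gen in range(60):
    pop.sort(key=peak_sidelobe)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(10)]

pop.sort(key=peak_sidelobe)
best = pop[0]
print(peak_sidelobe(best))
```

Different optimized masks can then be cycled between firing sequences so every element gets resting time, as the abstract proposes.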
Gomez, A; Bernardoni, N; Rieman, J; Dusick, A; Hartshorn, R; Read, D H; Socha, M T; Cook, N B; Döpfer, D
2014-10-01
A balanced, parallel-group, single-blinded randomized efficacy study divided into 2 periods was conducted to evaluate the effect of a premix containing higher than typically recommended levels of organic trace minerals and iodine (HOTMI) in reducing the incidence of active digital dermatitis (DD) lesions acquired naturally and induced by an experimental infection challenge model. For the natural exposure phase of the study, 120 healthy Holstein steers 5 to 7 mo of age without signs of hoof disease were randomized into 2 groups of 60 animals. The control group was fed a standard trace mineral supplement and the treatment group was fed the HOTMI premix, both for a period of 60 d. On d 60, 15 steers free of macroscopic DD lesions were randomly selected from each group for the challenge phase and transported to an experimental facility, where they were acclimated and then challenged within a DD infection model. The same diet group allocation was maintained during the 60 d of the challenge phase. The primary outcome measured was the development of an active DD lesion greater than 20 mm in diameter across its largest dimension. No lesions were identified during the natural exposure phase. During the challenge phase, 55% (11/20) and 30% (6/20) of feet were diagnosed with an active DD lesion in the control and treatment groups, respectively. Diagnosis of DD was confirmed by histopathologic demonstration of invasive Treponema spp. within eroded and hyperplastic epidermis and ulcerated papillary dermis. All DD confirmed lesions had dark-field microscopic features compatible with DD and were positive for Treponema spp. by PCR. As a secondary outcome, the average DD lesion size observed in all feet was also evaluated. Overall mean (standard deviation) lesion size was 17.1 (2.36) mm and 11.1 (3.33) mm for the control and treatment groups, respectively, with this difference being driven by acute DD lesions >20 mm.
A trend existed for the HOTMI premix to reduce the total DD infection rate and the average size of the experimentally induced lesions. Further research is needed to validate the effect of this intervention strategy in the field and to generate prevention and control measures aimed at optimizing claw health based on nutritional programs. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Hybrid computer optimization of systems with random parameters
NASA Technical Reports Server (NTRS)
White, R. C., Jr.
1972-01-01
A hybrid computer Monte Carlo technique for the simulation and optimization of systems with random parameters is presented. The method is applied to the simultaneous optimization of the means and variances of two parameters in the radar-homing missile problem treated by McGhee and Levine.
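The approach — estimate an expected cost by Monte Carlo over the random parameters, then adjust their means and variances with an outer optimization loop — can be sketched generically. The toy "system response" and the random-search optimizer below are placeholders for illustration, not the radar-homing missile model treated in the paper.

```python
import math
import random

random.seed(5)

def miss_distance(theta):
    """Invented deterministic system response to one sampled parameter value."""
    return (theta - 1.5) ** 2 + 0.1 * math.sin(5 * theta)

def expected_cost(mu, sigma, n=2000):
    """Monte Carlo estimate of the mean miss distance over the random parameter."""
    return sum(miss_distance(random.gauss(mu, sigma)) for _ in range(n)) / n

# Outer loop: random-search optimization of the parameter's mean and spread
best = (0.0, 1.0)
best_cost = expected_cost(*best)
for _ in range(300):
    mu = best[0] + random.gauss(0, 0.2)
    sigma = max(0.01, best[1] + random.gauss(0, 0.2))
    c = expected_cost(mu, sigma)
    if c < best_cost:
        best, best_cost = (mu, sigma), c

print(best, best_cost)
```

The hybrid-computer version ran the inner Monte Carlo on analog hardware for speed; here both loops are digital, but the structure (noisy inner estimate, derivative-free outer search) is the same.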
Jabbour, Elias; Saglio, Giuseppe; Steegmann, Juan Luis; Shah, Neil P.; Boqué, Concepción; Chuah, Charles; Pavlovsky, Carolina; Mayer, Jiří; Cortes, Jorge; Baccarani, Michele; Kim, Dong-Wook; Bradley-Garelik, M. Brigid; Mohamed, Hesham; Wildgust, Mark; Hochhaus, Andreas
2014-01-01
This analysis explores the impact of early cytogenetic and molecular responses on the outcomes of patients with chronic myeloid leukemia in chronic phase (CML-CP) in the phase 3 DASatinib versus Imatinib Study In treatment-Naive CML patients trial with a minimum follow-up of 3 years. Patients with newly diagnosed CML-CP were randomized to receive 100 mg dasatinib (n = 259) or 400 mg imatinib (n = 260) once daily. The retrospective landmark analysis included patients evaluable at the relevant time point (3, 6, or 12 months). Median time to complete cytogenetic response was 3 vs 6 months with dasatinib vs imatinib. At 3 and 6 months, the proportion of patients with BCR-ABL transcript levels ≤10% was higher in the dasatinib arm. Deeper responses at 3, 6, and 12 months were observed in a higher proportion of patients on dasatinib therapy and were associated with better 3-year progression-free survival and overall survival in both arms. First-line dasatinib resulted in faster and deeper responses compared with imatinib. The achievement of an early molecular response was predictive of improved progression-free survival and overall survival, supporting new milestones for optimal response in patients with early CML-CP treated with tyrosine kinase inhibitors. This study was registered at www.clinicaltrials.gov as NCT00481247. PMID:24311723
Efficacy of Lisdexamfetamine in Adults With Moderate to Severe Binge-Eating Disorder
McElroy, Susan L.; Ferreira-Cornwell, M. Celeste; Radewonuk, Jana; Gasior, Maria
2017-01-01
Importance: The ability of pharmacotherapies to prevent relapse and maintain efficacy with long-term treatment in psychiatric conditions is important. Objective: To assess lisdexamfetamine dimesylate maintenance of efficacy in adults with moderate to severe binge-eating disorder. Design, Setting, and Participants: A multinational, phase 3, double-blind, placebo-controlled, randomized withdrawal study including 418 participants was conducted at 49 clinical research study sites from January 27, 2014, to April 8, 2015. Eligible adults met DSM-IV-R binge-eating disorder criteria and had moderate to severe binge-eating disorder (≥3 binge-eating days per week for 14 days before open-label baseline; Clinical Global Impressions-Severity [CGI-S] scores ≥4 [moderate severity] at screening and open-label baseline). Following a 12-week, open-label phase (dose optimization, 4 weeks [lisdexamfetamine dimesylate, 50 or 70 mg]; dose maintenance, 8 weeks), lisdexamfetamine responders (≤1 binge-eating day per week for 4 consecutive weeks and CGI-S scores ≤2 at week 12) were randomized to placebo or continued lisdexamfetamine during a 26-week, double-blind, randomized withdrawal phase. Interventions: Lisdexamfetamine administration. Main Outcomes and Measures: The primary outcome variable, time to relapse (≥2 binge-eating days per week for 2 consecutive weeks and ≥2-point CGI-S score increases from randomized withdrawal baseline), was analyzed using a log-rank test (primary analysis); the analysis was stratified for dichotomized 4-week cessation status. Safety assessments included treatment-emergent adverse events. Results: Of the 418 participants enrolled in the open-label phase of the study, 411 (358 [87.1%] women; mean [SD] age, 38.3 [10.4] years) were included in the safety analysis set.
Of 275 randomized lisdexamfetamine responders (placebo, n = 138; lisdexamfetamine, n = 137), the observed proportions of participants meeting relapse criteria were 3.7% (5 of 136) for lisdexamfetamine and 32.1% (42 of 131) for placebo. Lisdexamfetamine demonstrated superiority over placebo on the log-rank test (χ2(1) = 40.37; P < .001) for time to relapse; the hazard ratio, based on a Cox proportional hazards model for lisdexamfetamine vs placebo, was 0.09 (95% CI, 0.04-0.23). The treatment-emergent adverse events observed were generally consistent with the known profile of lisdexamfetamine. Conclusions and Relevance: The risk of binge-eating relapse over 6 months was lower in participants continuing lisdexamfetamine than in those randomized to placebo. The hazard for relapse was lower with lisdexamfetamine than with placebo. Trial Registration: clinicaltrials.gov Identifier: NCT02009163 PMID:28700805
Phase equilibria computations of multicomponent mixtures at specified internal energy and volume
NASA Astrophysics Data System (ADS)
Myint, Philip C.; Nichols, Albert L., III; Springer, H. Keo
2017-06-01
Hydrodynamic simulation codes for high-energy density science applications often use internal energy and volume as their working variables. As a result, the codes must determine the thermodynamic state that corresponds to the specified energy and volume by finding the global maximum in entropy. This task is referred to as the isoenergetic-isochoric flash. Solving it for multicomponent mixtures is difficult because one must find not only the temperature and pressure consistent with the energy and volume, but also the number of phases present and the composition of the phases. The few studies on isoenergetic-isochoric flash that currently exist all require the evaluation of many derivatives that can be tedious to implement. We present an alternative approach that is based on a derivative-free method: particle swarm optimization. The global entropy maximum is found by running several instances of particle swarm optimization over different sets of randomly selected points in the search space. For verification, we compare the predicted temperature and pressure to results from the related, but simpler problem of isothermal-isobaric flash. All of our examples involve the equation of state we have recently developed for multiphase mixtures of the energetic materials HMX, RDX, and TNT. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
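The multi-start particle swarm strategy described above can be sketched on a toy objective: several independent swarms explore a multimodal surface, and the best result across runs is kept. The objective here is an arbitrary stand-in for the mixture entropy surface, not the authors' equation of state.

```python
import math
import random

random.seed(6)

def objective(p):
    """Toy multimodal surface standing in for the entropy to be maximized."""
    x, y = p
    return -(x * x + y * y) + 0.5 * math.cos(3 * x) * math.cos(3 * y)

def pso(n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Standard particle swarm optimization: maximize objective over [-4, 4]^2."""
    pos = [[random.uniform(-4, 4), random.uniform(-4, 4)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pval = [objective(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = objective(pos[i])
            if v > pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v > gval:
                    gbest, gval = pos[i][:], v
    return gbest, gval

# Several independent swarm runs; keep the best, as in the multi-start scheme
best_p, best_v = max((pso() for _ in range(5)), key=lambda r: r[1])
print(best_p, best_v)
```

The derivative-free character is the point: no entropy gradients with respect to temperature, pressure, or phase compositions are ever evaluated.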
NASA Astrophysics Data System (ADS)
Rong, J. H.; Yi, J. H.
2010-10-01
In density-based topology optimization, one expects the final result to consist of elements that are either black (solid material) or white (void), without any grey areas. Moreover, one also expects that the optimal topology can be obtained starting from any initial topology configuration. An improved structural topology optimization method for multi-displacement constraints is proposed in this paper. In the proposed method, the whole optimization process is divided into two optimization adjustment phases and a phase-transferring step. First, an optimization model is built to deal with the varied displacement limits, design space adjustments, and reasonable relations between the element stiffness matrix and mass and the element topology variable. Second, a procedure is proposed to solve the optimization problem formulated in the first optimization adjustment phase, starting with a small design space and advancing to a larger design space. The design space adjustments are automatic when the design domain needs expansion, and the convergence of the proposed method is not affected. The topology obtained by the proposed procedure in the first optimization phase approaches the vicinity of the optimum topology. Then, a heuristic algorithm is given to improve the efficiency and to make the designed structural topology black/white in both the phase-transferring step and the second optimization adjustment phase, in which the optimum topology is finally obtained. Two examples show that the topologies obtained by the proposed method have a very good 0/1 design distribution, and that computational efficiency is enhanced by reducing the number of elements in the structural finite element model during the two optimization adjustment phases. The examples also show that the method is robust and practicable.
Akil, Bisher; Blick, Gary; Hagins, Debbie P; Ramgopal, Moti N; Richmond, Gary J; Samuel, Rafik M; Givens, Naomi; Vavro, Cindy; Song, Ivy H; Wynne, Brian; Ait-Khaled, Mounir
2015-01-01
The Phase III VIKING-3 study demonstrated that dolutegravir (DTG) 50 mg twice daily was efficacious in antiretroviral therapy (ART)-experienced subjects harbouring raltegravir- and/or elvitegravir-resistant HIV-1. VIKING-4 (ING116529) included a placebo-controlled 7-day monotherapy phase to demonstrate that short-term antiviral activity was attributable to DTG. VIKING-4 is a Phase III randomized, double-blind study in which therapy-experienced adults with integrase inhibitor (INI)-resistant virus were randomized to DTG 50 mg twice daily or placebo while continuing their failing regimen (without raltegravir or elvitegravir) for 7 days (clinicaltrials.gov identifier NCT01568892). At day 8, all subjects switched to open-label DTG 50 mg twice daily and optimized background therapy including ≥1 fully active drug. The primary end point was change from baseline in plasma HIV-1 RNA at day 8. The study population (n=30) was highly ART-experienced with advanced HIV disease. Patients had extensive baseline resistance to all approved antiretroviral classes. Adjusted mean change in HIV-1 RNA at day 8 was -1.06 log10 copies/ml for the DTG arm and 0.10 log10 copies/ml for the placebo arm (treatment difference -1.16 log10 copies/ml [-1.52, -0.80]; P<0.001). Overall, 47% and 57% of subjects had plasma HIV-1 RNA <50 and <400 copies/ml at week 24, and 40% and 53% at week 48, respectively. No discontinuations due to drug-related adverse events occurred in the study. The observed day 8 antiviral activity in this highly treatment-experienced population with INI-resistant HIV-1 was attributable to DTG. Longer-term efficacy (after considering baseline ART resistance) and safety during the open-label phase were in line with the results of the larger VIKING-3 study.
Quadrupedal galloping control for a wide range of speed via vertical impulse scaling.
Park, Hae-Won; Kim, Sangbae
2015-03-25
This paper presents a bio-inspired quadruped controller that allows variable-speed galloping. The controller design is inspired by observations of biological runners. Quadrupedal animals increase the vertical impulse that is generated by ground reaction forces at each stride as running speed increases and the duration of each stance phase decreases, whereas the swing phase stays relatively constant. Inspired by this observation, the presented controller estimates the required vertical impulse at each stride by applying the linear momentum conservation principle in the vertical direction and prescribes the ground reaction forces at each stride. The design process begins with deriving a planar model from the MIT Cheetah 2 robot. A baseline periodic limit cycle is obtained by optimizing ground reaction force profiles and the temporal gait pattern (timing and duration of gait phases). To stabilize the optimized limit cycle, the obtained limit cycle is converted to a state feedback controller by representing the obtained ground reaction force profiles as functions of a state variable that is monotonically increasing throughout the gait, adding impedance control around the height and pitch trajectories of the obtained limit cycle, and introducing a finite state machine and a pattern stabilizer to enforce the optimized gait pattern. The controller, which achieves a stable 3 m s(-1) gallop, successfully adapts to speed changes by scaling the vertical ground reaction force to match the momentum lost to gravity and adding a simple speed controller that controls horizontal speed. Without requiring additional gait optimization processes, the controller achieves galloping at speeds ranging from 3 m s(-1) to 14.9 m s(-1) while respecting the torque limit of the motor used in the MIT Cheetah 2 robot.
The robustness of the controller is verified by demonstrating stable running during various disturbances, including 1.49 m step down and 0.18 m step up, as well as random ground height and model parameter variations.
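The impulse-scaling principle above can be sketched in a few lines: over one stride, vertical momentum conservation requires the stance-phase ground reaction impulse to cancel the gravitational impulse accumulated over the whole stride, so as stance shortens at higher speed the required peak force grows. The function names and numbers below are illustrative, not taken from the paper.

```python
def required_vertical_impulse(mass, stride_period):
    """Vertical momentum conservation over one stride: the ground reaction
    impulse during stance must cancel the gravitational impulse m*g*T
    accumulated over the whole stride period T."""
    g = 9.81
    return mass * g * stride_period

def peak_force_scale(mass, stride_period, stance_duration):
    """If the stance force profile is scaled by a single factor, its peak
    scales as impulse / stance duration (illustrative only)."""
    return required_vertical_impulse(mass, stride_period) / stance_duration

# As speed rises, stance shortens while swing stays roughly constant,
# so the required peak vertical force grows (hypothetical numbers).
slow = peak_force_scale(33.0, 0.5, 0.25)
fast = peak_force_scale(33.0, 0.35, 0.10)
```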
N'Gom, Moussa; Lien, Miao-Bin; Estakhri, Nooshin M; Norris, Theodore B; Michielssen, Eric; Nadakuditi, Raj Rao
2017-05-31
Complex Semi-Definite Programming (SDP) is introduced as a novel approach to phase-retrieval-enabled control of monochromatic light transmission through highly scattering media. In a simple optical setup, a spatial light modulator is used to generate a random sequence of phase-modulated wavefronts, and the resulting intensity speckle patterns in the transmitted light are acquired on a camera. The SDP algorithm allows computation of the complex transmission matrix of the system from this sequence of intensity-only measurements, without the need for a reference beam. Once the transmission matrix is determined, optimal wavefronts are computed that focus the incident beam to any position or sequence of positions on the far side of the scattering medium, without the need for any subsequent measurements or wavefront shaping iterations. The number of measurements required and the degree of enhancement of the intensity at focus are determined by the number of pixels controlled by the spatial light modulator.
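Once the transmission matrix is known, the focusing step described above reduces to phase conjugation of one row. A minimal numpy sketch, with a random complex matrix standing in for the SDP-recovered one (the SDP recovery step itself is not shown):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 64, 32
# Random complex Gaussian matrix as a stand-in for the transmission
# matrix that the SDP algorithm would recover from intensity data.
T = (rng.standard_normal((n_out, n_in))
     + 1j * rng.standard_normal((n_out, n_in))) / np.sqrt(2 * n_in)

target = 7  # output pixel to focus on
# Phase-only SLM pattern: conjugate the phase of row `target` of T,
# so the contributions of all input pixels add in phase at the target.
slm_field = np.exp(-1j * np.angle(T[target]))
intensity = np.abs(T @ slm_field) ** 2

# The focused pixel should dominate the background speckle;
# for N phase-controlled pixels the expected enhancement is ~(pi/4)N.
enhancement = intensity[target] / np.mean(np.delete(intensity, target))
```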
NASA Astrophysics Data System (ADS)
Chen, Yu; Zou, Jian; Yang, Zi-Yi; Li, Longwu; Li, Hai; Shao, Bin
2016-08-01
The dynamics of the quantum Fisher information (QFI) of an N-qubit GHZ state driven by phase noise lasers (PNLs) is investigated in terms of a non-Markovian master equation. We first investigate the non-Markovian dynamics of the QFI of the N-qubit GHZ state and show that when the ratio of the PNL rate to the system-environment coupling strength is very small, the oscillations of the QFI decay slowly, corresponding to the non-Markovian region; when the ratio becomes large, the QFI decays monotonically, corresponding to the Markovian region. When the atom number N increases, the QFI in both regions decays faster. We further find that the QFI flow disappears suddenly, followed by a sudden birth, depending on the ratio of the PNL rate to the system-environment coupling strength and on the atom number N, unveiling a fundamental connection between the non-Markovian behaviors and the parameters of the system-environment couplings. We discuss two optimal positive operator-valued measures (POVMs) for two different strategies of our model and find the condition for the optimal measurement. Finally, we consider the QFI of two atoms with qubit-qubit interaction under random telegraph noises (RTNs).
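For context on the initial value of the quantity studied above: for a pure state under phase encoding exp(-iθG), the QFI is 4 Var(G), and for an N-qubit GHZ state with G = (1/2)Σσ_z it equals the Heisenberg value N². A small numerical check (standard textbook result, not the paper's noisy dynamics):

```python
import numpy as np

def ghz_qfi(n):
    """QFI of an N-qubit GHZ state for phase encoding exp(-i*theta*G),
    with G = (1/2) * sum_k sigma_z^(k). For a pure state, F_Q = 4 Var(G)."""
    dim = 2 ** n
    psi = np.zeros(dim)
    psi[0] = psi[-1] = 1 / np.sqrt(2)  # (|0...0> + |1...1>)/sqrt(2)
    # G is diagonal in the computational basis: eigenvalue
    # (n - 2*popcount(b))/2 for basis state with bit pattern b.
    diag = np.array([(n - 2 * bin(b).count("1")) / 2 for b in range(dim)])
    mean = psi @ (diag * psi)
    mean_sq = psi @ (diag ** 2 * psi)
    return 4 * (mean_sq - mean ** 2)
```

Noise such as the PNL driving in the abstract degrades this N² value over time, which is what the non-Markovian master equation tracks.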
Cohlen, B J
2009-01-01
World-wide, intrauterine insemination (IUI) is still one of the most widely applied techniques to enhance the probability of conception in couples with longstanding subfertility. The outcome of this treatment option depends on many confounding factors. One of the confounding factors receiving little attention is the quality of the luteal phase. From IVF studies, it is known that ovarian stimulation causes luteal phase deficiency. Based on the best available evidence, this short review summarizes the indications for mild ovarian stimulation combined with IUI and the optimal stimulation programme. While it has been established that stimulated IVF/intracytoplasmic sperm injection cycles have deficient luteal phases, the question remains whether the quality of the luteal phase when only two or three corpora lutea are present (as is the case in stimulated IUI cycles) is impaired as well. There are too few large non-IVF trials studying luteal phase quality to answer this question. Recently a randomized trial has been published that investigated luteal phase support in an IUI programme. This study is discussed in detail. It is recommended to apply luteal phase support in stimulated IUI cycles only when proven cost-effective. Further trials are mandatory to investigate both endometrial and hormonal profile changes in the luteal phase after mild ovarian stimulation, and the cost-effectiveness of luteal support in IUI programmes.
Solid oxide fuel cell anode image segmentation based on a novel quantum-inspired fuzzy clustering
NASA Astrophysics Data System (ADS)
Fu, Xiaowei; Xiang, Yuhan; Chen, Li; Xu, Xin; Li, Xi
2015-12-01
High quality microstructure modeling can optimize the design of fuel cells. For accurate three-phase identification of Solid Oxide Fuel Cell (SOFC) microstructure, this paper proposes a novel image segmentation method for YSZ/Ni anode Optical Microscopic (OM) images. Following Quantum Signal Processing (QSP), the proposed approach exploits a quantum-inspired adaptive fuzziness factor to adaptively estimate the energy function in a fuzzy system based on a Markov Random Field (MRF). Before defuzzification, a quantum-inspired probability distribution based on distance and gray correction is proposed, which can adaptively adjust the inaccurate probability estimates of uncertain points caused by noise and edge points. In this study, the proposed method improves the accuracy and effectiveness of three-phase identification in the microstructural investigation, and provides a firm foundation for investigating the microstructural evolution and its related properties.
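As a baseline for the fuzzy-clustering component described above, plain fuzzy c-means on scalar gray values with three clusters (one per anode phase) can be sketched as follows. This is the classical FCM update, not the paper's quantum-inspired variant with the MRF energy term; `fcm_gray` is an assumed name.

```python
import numpy as np

def fcm_gray(values, n_clusters=3, m=2.0, n_iter=100):
    """Plain fuzzy c-means on scalar gray values (three clusters for the
    three anode phases). Illustrative baseline only."""
    x = np.asarray(values, dtype=float)
    # Deterministic init: spread initial centers over the gray range.
    centers = np.quantile(x, np.linspace(0.2, 0.8, n_clusters))
    for _ in range(n_iter):
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        # Membership update: u_ij ∝ d_ij^(-2/(m-1)), normalized per pixel.
        w = d ** (-2.0 / (m - 1.0))
        u = w / w.sum(axis=0)
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)  # weighted center update
    return centers, u

# Synthetic gray values from three well-separated phases.
rng = np.random.default_rng(1)
vals = np.concatenate([np.full(50, 0.1), np.full(50, 0.5), np.full(50, 0.9)])
vals = vals + 0.01 * rng.standard_normal(vals.size)
centers, u = fcm_gray(vals)
```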
van Atteveldt, Nienke; Musacchia, Gabriella; Zion-Golumbic, Elana; Sehatpour, Pejman; Javitt, Daniel C.; Schroeder, Charles
2015-01-01
The brain’s fascinating ability to adapt its internal neural dynamics to the temporal structure of the sensory environment is becoming increasingly clear. It is thought to be metabolically beneficial to align ongoing oscillatory activity to the relevant inputs in a predictable stream, so that they will enter at optimal processing phases of the spontaneously occurring rhythmic excitability fluctuations. However, some contexts have a more predictable temporal structure than others. Here, we tested the hypothesis that the processing of rhythmic sounds is more efficient than the processing of irregularly timed sounds. To do this, we simultaneously measured functional magnetic resonance imaging (fMRI) and electro-encephalograms (EEG) while participants detected oddball target sounds in alternating blocks of rhythmic (i.e., with equal inter-stimulus intervals) or random (i.e., with randomly varied inter-stimulus intervals) tone sequences. Behaviorally, participants detected target sounds faster and more accurately when embedded in rhythmic streams. The fMRI response in the auditory cortex was stronger during random compared to rhythmic tone sequence processing. Simultaneously recorded N1 responses showed larger peak amplitudes and longer latencies for tones in the random (vs. the rhythmic) streams. These results reveal complementary evidence for more efficient neural and perceptual processing during temporally predictable sensory contexts. PMID:26579044
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bandlow, Alisa; Durfee, Justin David; Frazier, Christopher Rawls
2016-05-01
This requirements document serves as an addendum to the Contingency Contractor Optimization Phase 2, Requirements Document [1] and Phase 3 Requirements Document [2]. The Phase 2 Requirements document focused on the high-level requirements for the tool. The Phase 3 Requirements document provided more detailed requirements to which the engineering prototype was built in Phase 3. This document will provide detailed requirements for features and enhancements being added to the production pilot in the Phase 3 Sustainment.
NASA Astrophysics Data System (ADS)
Perugini, G.; Ricci-Tersenghi, F.
2018-01-01
We first present an empirical study of the Belief Propagation (BP) algorithm, when run on the random field Ising model defined on random regular graphs in the zero temperature limit. We introduce the notion of extremal solutions for the BP equations, and we use them to fix a fraction of spins in their ground state configuration. At the phase transition point the fraction of unconstrained spins percolates and their number diverges with the system size. This in turn makes the associated optimization problem highly nontrivial in the critical region. Using the bounds on the BP messages provided by the extremal solutions, we design a new and easy-to-implement BP scheme which is able to output a large number of stable fixed points. On the one hand, this new algorithm is able to provide the minimum energy configuration with high probability in a competitive time. On the other hand, we found that the number of fixed points of the BP algorithm grows with the system size in the critical region. This unexpected feature poses new relevant questions about the physics of this class of models.
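The BP message-passing machinery referred to above can be illustrated with plain finite-temperature BP for an Ising model on a ring (a 2-regular graph), standing in for the random regular graphs of the paper; the extremal-solution scheme itself is not reproduced. The cavity-field update below is the standard one, u = (1/β) atanh(tanh(βJ) tanh(β(h + Σu))).

```python
import numpy as np

def bp_ising_ring(h, J=1.0, beta=2.0, n_iter=500, damping=0.5):
    """Belief propagation for a ferromagnetic Ising ring with local
    fields h (a minimal stand-in for random regular graphs).
    u[i, 0] is the cavity message from node i to neighbor i+1,
    u[i, 1] the message from node i to neighbor i-1."""
    n = len(h)
    u = np.zeros((n, 2))
    t = np.tanh(beta * J)
    for _ in range(n_iter):
        new = np.zeros_like(u)
        for i in range(n):
            # Message to i+1 uses the message i received from i-1, and vice versa.
            new[i, 0] = np.arctanh(t * np.tanh(beta * (h[i] + u[(i - 1) % n, 0]))) / beta
            new[i, 1] = np.arctanh(t * np.tanh(beta * (h[i] + u[(i + 1) % n, 1]))) / beta
        u = damping * u + (1 - damping) * new  # damped update for stability
    # Local magnetizations from the two incoming cavity fields.
    return np.array([np.tanh(beta * (h[i] + u[(i - 1) % n, 0] + u[(i + 1) % n, 1]))
                     for i in range(n)])

rng = np.random.default_rng(2)
h = rng.normal(0.0, 0.5, 50)   # random local fields
m = bp_ising_ring(h)
```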
Optimization of pelvic heating rate distributions with electromagnetic phased arrays.
Paulsen, K D; Geimer, S; Tang, J; Boyse, W E
1999-01-01
Deep heating of pelvic tumours with electromagnetic phased arrays has recently been reported to improve local tumour control when combined with radiotherapy in a randomized clinical trial despite the fact that rather modest elevations in tumour temperatures were achieved. It is reasonable to surmise that improvements in temperature elevation could lead to even better tumour response rates, motivating studies which attempt to explore the parameter space associated with heating rate delivery in the pelvis. Computational models which are based on detailed three-dimensional patient anatomy are readily available and lend themselves to this type of investigation. In this paper, volume average SAR is optimized in a predefined target volume subject to a maximum allowable volume average SAR outside this zone. Variables under study include the position of the target zone, the number and distribution of radiators and the applicator operating frequency. The results show a clear preference for increasing frequency beyond 100 MHz, which is typically applied clinically, especially as the number of antennae increases. Increasing both the number of antennae per circumferential distance around the patient, as well as the number of independently functioning antenna bands along the patient length, is important in this regard, although improvements were found to be more significant with increasing circumferential antenna density. However, there is considerable site-specific variation and cases occur where lower numbers of antennae spread out over multiple longitudinal bands are more advantageous. The results presented here have been normalized relative to an optimized set of antenna array amplitudes and phases operating at 100 MHz which is a common clinical configuration. The intent is to provide some indications of avenues for improving the heating rate distributions achievable with current technology.
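Optimizing antenna amplitudes and phases for the target-to-background SAR ratio, as described above, can (under common quadratic-form assumptions) be cast as a generalized Hermitian eigenproblem: average SAR in a region is proportional to w^H(E^H E)w for complex weights w. The field matrices below are random stand-ins for ones computed from a 3-D patient model; variable names are assumptions.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(3)
n_ant, n_target, n_other = 8, 20, 200

# Hypothetical complex E-field matrices: rows are tissue sample points,
# columns are antenna channels.
Et = rng.standard_normal((n_target, n_ant)) + 1j * rng.standard_normal((n_target, n_ant))
Eo = rng.standard_normal((n_other, n_ant)) + 1j * rng.standard_normal((n_other, n_ant))

# Volume-average SAR in each region is proportional to w^H (E^H E) w.
A = Et.conj().T @ Et / n_target   # target-region SAR matrix
B = Eo.conj().T @ Eo / n_other    # non-target SAR matrix

# Maximize target SAR at fixed non-target SAR: generalized Hermitian
# eigenproblem A w = lambda B w; the top eigenvector is optimal.
vals, vecs = eigh(A, B)
w_opt = vecs[:, -1]

def sar_ratio(w):
    return np.real(w.conj() @ A @ w) / np.real(w.conj() @ B @ w)
```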
Yoon, Soon Ho; Jung, Julip; Hong, Helen; Park, Eun Ah; Lee, Chang Hyun; Lee, Youkyung; Jin, Kwang Nam; Choo, Ji Yung; Lee, Nyoung Keun
2014-01-01
Objective: To evaluate the technical feasibility, performance, and interobserver agreement of a computer-aided classification (CAC) system for regional ventilation at two-phase xenon-enhanced CT in patients with chronic obstructive pulmonary disease (COPD). Materials and Methods: Thirty-eight patients with COPD underwent two-phase xenon ventilation CT with resulting wash-in (WI) and wash-out (WO) xenon images. The regional ventilation in structural abnormalities was visually categorized into four patterns by consensus of two experienced radiologists who compared the xenon attenuation of structural abnormalities with that of adjacent normal parenchyma in the WI and WO images, and it served as the reference. Two series of image datasets of structural abnormalities were randomly extracted for optimization and validation. The proportion of agreement on a per-lesion basis and receiver operating characteristics on a per-pixel basis between CAC and reference were analyzed for optimization. Thereafter, six readers independently categorized the regional ventilation in structural abnormalities in the validation set without and with a CAC map. Interobserver agreement was also compared between assessments without and with CAC maps using multirater κ statistics. Results: Computer-aided classification maps were successfully generated in 31 patients (81.5%). The proportion of agreement and the average area under the curve of optimized CAC maps were 94% (75/80) and 0.994, respectively. Multirater κ value was improved from moderate (κ = 0.59; 95% confidence interval [CI], 0.56-0.62) at the initial assessment to excellent (κ = 0.82; 95% CI, 0.79-0.85) with the CAC map. Conclusion: Our proposed CAC system demonstrated the potential for regional ventilation pattern analysis and enhanced interobserver agreement on visual classification of regional ventilation. PMID:24843245
Hubbard, Joleen M.; Mahoney, Michelle R.; Loui, William S.; Roberts, Lewis R.; Smyrk, Thomas C.; Gatalica, Zoran; Borad, Mitesh; Kumar, Shaji; Alberts, Steven R.
2017-01-01
Background: Angiogenesis has been a major target of novel drug development in hepatocellular carcinoma (HCC). It is hypothesized that the combination of two antiangiogenic agents, sorafenib and bevacizumab, will provide greater blockade of angiogenesis. Objective: To determine the optimal dose, safety, and effectiveness of dual anti-angiogenic therapy with sorafenib and bevacizumab in patients with advanced HCC. Patients and Methods: Patients with locally advanced or metastatic HCC not amenable for surgery or liver transplant were eligible. The phase I starting dose level was bevacizumab 1.25 mg/kg day 1 and 15 plus sorafenib 400 mg twice daily (BID) days 1–28. In the phase II portion, patients were randomized to receive bevacizumab and sorafenib at the maximum tolerated dose (MTD) or sorafenib 400 mg BID. Results: 17 patients were enrolled in the phase I component. Dose-limiting toxicities included grade 3 hand/foot skin reaction, fatigue, hypertension, alanine/aspartate aminotransferase increase, dehydration, hypophosphatemia, creatinine increase, hypoglycemia, nausea/vomiting, and grade 4 hyponatremia. 7 patients were enrolled onto the phase II component at the MTD: sorafenib 200 mg BID days 1–28 and bevacizumab 2.5 mg/kg every other week. 57% (4/7) had grade 3 AEs at least possibly related to treatment. No responses were observed in the phase II portion. Estimated median time to progression and survival were 8.6 months (95% CI: 0.4–16.3) and 13.3 months (95% CI 4.4 – not estimable), respectively. Conclusions: The MTD of the combination is sorafenib 200 mg twice daily on days 1–28 plus bevacizumab 2.5 mg/kg on days 1 and 15 of a 28-day cycle. In the phase II portion of the trial, concerns regarding excessive toxicity, low efficacy, and slow enrollment led to discontinuation of the trial. (Clinical Trials ID: NCT00867321.) PMID:27943153
Implications of crater distributions on Venus
NASA Technical Reports Server (NTRS)
Kaula, W. M.
1993-01-01
The horizontal locations of craters on Venus are consistent with randomness. However, (1) randomness does not make crater counts useless for age indications; (2) consistency does not imply necessity or optimality; and (3) horizontal location is not the only reference frame against which to test models. Re (1), the apparent smallness of resurfacing areas means that a region on the order of one percent of the planet with a typical number of craters, 5-15, will have a range of feature ages of several 100 My. Re (2), models of resurfacing somewhat similar to Earth's can be found that are also consistent and more likely than random: i.e., resurfacing occurring in clusters that arise and die away in time intervals on the order of 50 My. These agree with the observation that there are more areas of high crater density, and fewer of moderate density, than expected for random. Re (3), 799 crater elevations were tested; there are more at low elevations and fewer at high elevations than expected for random: i.e., 54.6 percent below the median. Only one of 40 random sets of 799 was as extreme.
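The elevation test in Re (3) can be sketched with a simple null model: since the median elevation splits the surface area in half by definition, under random placement the number of craters below the median is Binomial(n, 1/2). A normal-approximation p-value for the observed 54.6% of 799 craters (numbers from the abstract; the function name is an assumption):

```python
import math

def one_sided_p_below_median(n_craters, n_below):
    """Normal approximation to P(X >= n_below) for X ~ Binomial(n, 1/2):
    how surprising is the observed excess of low-elevation craters if
    crater locations were random, with half the surface below the median?"""
    mean = n_craters / 2.0
    sd = math.sqrt(n_craters) / 2.0
    z = (n_below - 0.5 - mean) / sd  # continuity-corrected z-score
    return 0.5 * math.erfc(z / math.sqrt(2))

# 54.6% of 799 craters observed below the median elevation.
p = one_sided_p_below_median(799, round(0.546 * 799))
```

The resulting p-value is well below 0.05, consistent with the abstract's Monte Carlo finding that only one of 40 random sets was as extreme.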
NASA Astrophysics Data System (ADS)
Sandrik, Suzannah
Optimal solutions to the impulsive circular phasing problem, a special class of orbital maneuver in which impulsive thrusts shift a vehicle's orbital position by a specified angle, are found using primer vector theory. The complexities of optimal circular phasing are identified and illustrated using specifically designed Matlab software tools. Information from these new visualizations is applied to explain discrepancies in locally optimal solutions found by previous researchers. Two non-phasing circle-to-circle impulsive rendezvous problems are also examined to show the applicability of the tools developed here to a broader class of problems and to show how optimizing these rendezvous problems differs from the circular phasing case.
Deininger, Michael W.; Kopecky, Kenneth J.; Radich, Jerald P.; Kamel-Reid, Suzanne; Stock, Wendy; Paietta, Elisabeth; Emanuel, Peter D.; Tallman, Martin; Wadleigh, Martha; Larson, Richard A.; Lipton, Jeffrey H.; Slovak, Marilyn L.; Appelbaum, Frederick R.; Druker, Brian J.
2014-01-01
The standard dose of imatinib for newly diagnosed patients with chronic phase chronic myeloid leukemia (CP-CML) is 400mg daily (IM400), but the optimal dose is unknown. This randomized phase II study compared the rates of molecular, haematologic and cytogenetic response to IM400 vs. imatinib 400mg twice daily (IM800) in 153 adult patients with CP-CML. Dose adjustments for toxicity were flexible to maximize retention on study. Molecular response (MR) at 12 months was deeper in the IM800 arm (4-log reduction of BCR-ABL1 mRNA: 25% vs. 10% of patients, P=0.038; 3-log reduction: 53% vs. 35%, P=0.049). During the first 12 months BCR-ABL1 levels in the IM800 arm were an average 2.9-fold lower than in the IM400 arm (P=0.010). Complete haematologic response was similar, but complete cytogenetic response was higher with IM800 (85% vs. 67%, P=0.040). Grade 3–4 toxicities were more common for IM800 (58% vs. 31%, P=0.0007), and were most commonly haematologic. Few patients have relapsed, progressed or died, but progression-free (P=0.048) and relapse-free (P=0.031) survival were superior for IM800. In newly diagnosed CP-CML patients, IM800 induced deeper molecular responses than IM400, with a trend for improved progression-free and overall survival, but was associated with more severe toxicity. PMID:24383843
Skorupski, K. A.; Uhl, J. M.; Szivek, A; Allstadt Frazier, S. D.; Rebhun, R. B.; Rodriguez, C. O.
2016-01-01
Despite numerous published studies describing adjuvant chemotherapy for canine appendicular osteosarcoma, there is no consensus as to the optimal chemotherapy protocol. The purpose of this study was to determine whether either of two protocols would be associated with longer disease-free interval (DFI) in dogs with appendicular osteosarcoma following amputation. Dogs with histologically confirmed appendicular osteosarcoma that were free of gross metastases and underwent amputation were eligible for enrollment. Dogs were randomized to receive either six doses of carboplatin or three doses each of carboplatin and doxorubicin on an alternating schedule. Fifty dogs were included. Dogs receiving carboplatin alone had a significantly longer DFI (425 versus 135 days) than dogs receiving alternating carboplatin and doxorubicin (P = 0.04). Toxicity was similar between groups. These results suggest that six doses of carboplatin may be associated with a superior DFI when compared to six total doses of alternating carboplatin and doxorubicin. PMID:24118677
Measurement of damping and temperature: Precision bounds in Gaussian dissipative channels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Monras, Alex; Illuminati, Fabrizio
2011-01-15
We present a comprehensive analysis of the performance of different classes of Gaussian states in the estimation of Gaussian phase-insensitive dissipative channels. In particular, we investigate the optimal estimation of the damping constant and reservoir temperature. We show that, for two-mode squeezed vacuum probe states, the quantum-limited accuracy of both parameters can be achieved simultaneously. Moreover, we show that for both parameters two-mode squeezed vacuum states are more efficient than coherent, thermal, or single-mode squeezed states. This suggests that at high-energy regimes, two-mode squeezed vacuum states are optimal within the Gaussian setup. This optimality result indicates a stronger form of compatibility for the estimation of the two parameters. Indeed, not only can the minimum variance be achieved at fixed probe states, but also the optimal state is common to both parameters. Additionally, we explore numerically the performance of non-Gaussian states for particular parameter values to find that maximally entangled states within d-dimensional cutoff subspaces (d{<=}6) perform better than any randomly sampled states with similar energy. However, we also find that states with very similar performance and energy exist with much less entanglement than the maximally entangled ones.
Co-state initialization for the minimum-time low-thrust trajectory optimization
NASA Astrophysics Data System (ADS)
Taheri, Ehsan; Li, Nan I.; Kolmanovsky, Ilya
2017-05-01
This paper presents an approach for co-state initialization, which is a critical step in solving minimum-time low-thrust trajectory optimization problems using indirect optimal control numerical methods. Indirect methods used in determining optimal space trajectories typically result in two-point boundary-value problems and are solved by single- or multiple-shooting numerical methods. Accurate initialization of the co-state variables facilitates the numerical convergence of iterative boundary value problem solvers. In this paper, we propose a method which exploits the trajectory generated by the so-called pseudo-equinoctial and three-dimensional finite Fourier series shape-based methods to estimate the initial values of the co-states. The performance of the approach for two interplanetary rendezvous missions from Earth to Mars and from Earth to asteroid Dionysus is compared against three other approaches which, respectively, exploit random initialization of co-states, the adjoint-control transformation, and a standard genetic algorithm. The results indicate that by using our proposed approach the percentage of converged cases is higher for trajectories with a higher number of revolutions, while the computation time is lower. These features are advantageous for broad trajectory searches in the preliminary phase of mission design.
Optimization technique of wavefront coding system based on ZEMAX externally compiled programs
NASA Astrophysics Data System (ADS)
Han, Libo; Dong, Liquan; Liu, Ming; Zhao, Yuejin; Liu, Xiaohua
2016-10-01
Wavefront coding is a means of athermalization for infrared imaging systems, and the design of the phase plate is the key to system performance. This paper applies ZEMAX's externally compiled programs to the optimization of the phase mask within the normal optical design process: the evaluation function of the wavefront coding system is defined based on the consistency of the modulation transfer function (MTF), and the speed of optimization is improved by introducing mathematical software. The user writes an external program that computes the evaluation function, taking advantage of the powerful computing features of the mathematical software to find the optimal parameters of the phase mask; convergence is accelerated through a genetic algorithm (GA), and a dynamic data exchange (DDE) interface between ZEMAX and the mathematical software realizes high-speed data exchange. The optimization of a rotationally symmetric phase mask and a cubic phase mask has been completed by this method: the depth of focus increases nearly 3 times with the rotationally symmetric phase mask and up to 10 times with the cubic phase mask, the variation of the MTF decreases markedly, and the optimized systems operate over a temperature range of -40°C to 60°C. Results show that, owing to its externally compiled functions and DDE, this optimization method makes it convenient to define unconventional optimization goals and to rapidly optimize optical systems with special properties, which is of considerable significance for the optimization of unconventional optical systems.
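The MTF-consistency merit function described above can be illustrated with a toy 1-D pupil model: compute the MTF at several defocus values (standing in for temperature-induced focus shifts) and penalize its variation. This is a sketch, not ZEMAX; `mtf_1d`, `mtf_consistency_merit`, and all parameter values are assumptions.

```python
import numpy as np

def mtf_1d(defocus, alpha, n=256):
    """1-D pupil with a quadratic defocus term and a cubic phase-mask
    term alpha; returns the normalized MTF (toy model)."""
    x = np.linspace(-1, 1, n)
    pupil = np.exp(1j * (defocus * x ** 2 + alpha * x ** 3))
    psf = np.abs(np.fft.fft(pupil, 4 * n)) ** 2   # zero-padded FFT
    mtf = np.abs(np.fft.fft(psf))                 # |OTF| = |FT of PSF|
    return mtf[:n] / mtf[0]                       # normalize to DC

def mtf_consistency_merit(alpha, defocus_range):
    """Evaluation function in the spirit of the paper: the mean variance
    of the MTF across the (temperature-induced) defocus range; lower
    means a more consistent MTF."""
    mtfs = np.array([mtf_1d(d, alpha) for d in defocus_range])
    return float(np.mean(np.var(mtfs, axis=0)))

defocus = np.linspace(-10, 10, 9)                 # phase units, illustrative
merit_plain = mtf_consistency_merit(0.0, defocus)   # no phase mask
merit_cubic = mtf_consistency_merit(60.0, defocus)  # strong cubic mask
```

A GA would then search over alpha (and any other mask parameters) to minimize this merit value.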
Quenched bond randomness: Superfluidity in porous media and the strong violation of universality
NASA Astrophysics Data System (ADS)
Falicov, Alexis; Berker, A. Nihat
1997-04-01
The effects of quenched bond randomness are most readily studied with superfluidity immersed in a porous medium. A lattice model for 3He-4He mixtures and incomplete 4He fillings in aerogel yields the signature effect of bond randomness, namely the conversion of symmetry-breaking first-order phase transitions into second-order phase transitions, the λ-line reaching zero temperature, and the elimination of non-symmetry-breaking first-order phase transitions. The model recognizes the importance of the connected nature of aerogel randomness and thereby yields superfluidity at very low 4He concentrations, a phase separation entirely within the superfluid phase, and the order-parameter contrast between mixtures and incomplete fillings, all in agreement with experiments. The special properties of the helium mixture/aerogel system are distinctly linked to the aerogel properties of connectivity, randomness, and tenuousness, via the additional study of a regularized “jungle-gym” aerogel. Renormalization-group calculations indicate that a strong violation of the empirical universality principle of critical phenomena occurs under quenched bond randomness. It is argued that helium/aerogel critical properties reflect this violation and further experiments are suggested. Renormalization-group analysis also shows that, adjoiningly to the strong universality violation (which hinges on the occurrence or non-occurrence of asymptotic strong coupling—strong randomness under rescaling), there is a new “hyperuniversality” at phase transitions with asymptotic strong coupling—strong randomness behavior, for example assigning the same critical exponents to random-bond tricriticality and random-field criticality.
NASA Astrophysics Data System (ADS)
Zhu, Zheng; Ochoa, Andrew J.; Katzgraber, Helmut G.
2018-05-01
The search for problems where quantum adiabatic optimization might excel over classical optimization techniques has sparked a recent interest in inducing a finite-temperature spin-glass transition in quasiplanar topologies. We have performed large-scale finite-temperature Monte Carlo simulations of a two-dimensional square-lattice bimodal spin glass with next-nearest ferromagnetic interactions claimed to exhibit a finite-temperature spin-glass state for a particular relative strength of the next-nearest to nearest interactions [Phys. Rev. Lett. 76, 4616 (1996), 10.1103/PhysRevLett.76.4616]. Our results show that the system is in a paramagnetic state in the thermodynamic limit, despite zero-temperature simulations [Phys. Rev. B 63, 094423 (2001), 10.1103/PhysRevB.63.094423] suggesting the existence of a finite-temperature spin-glass transition. Therefore, deducing the finite-temperature behavior from zero-temperature simulations can be dangerous when corrections to scaling are large.
Efficient fractal-based mutation in evolutionary algorithms from iterated function systems
NASA Astrophysics Data System (ADS)
Salcedo-Sanz, S.; Aybar-Ruíz, A.; Camacho-Gómez, C.; Pereira, E.
2018-03-01
In this paper we present a new mutation procedure for Evolutionary Programming (EP) approaches, based on Iterated Function Systems (IFSs). The new mutation procedure consists of considering a set of IFSs which are able to generate fractal structures in a two-dimensional phase space, and using them to modify a current individual of the EP algorithm, instead of using random numbers from different probability density functions. We test this new proposal on a set of benchmark functions for continuous optimization problems. In this case, we compare the proposed mutation against classical Evolutionary Programming approaches, with mutations based on Gaussian, Cauchy and chaotic maps. We also include a discussion of the IFS-based mutation in a real application of Tuned Mass Damper (TMD) location and optimization for vibration cancellation in buildings. In both practical cases, the proposed EP with the IFS-based mutation obtained extremely competitive results compared to alternative classical mutation operators.
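The core idea above — drawing mutation offsets from a fractal-generating IFS instead of a Gaussian or Cauchy density — can be sketched with the chaos game on a Sierpinski-triangle IFS. This is an illustrative version of the idea, not the authors' exact operator; `ifs_mutation_offsets` and `mutate` are assumed names.

```python
import numpy as np

def ifs_mutation_offsets(n_points, seed=0):
    """Generate 2-D offsets by the chaos game on a Sierpinski-triangle
    IFS (three contractive affine maps), as a fractal-structured
    alternative to i.i.d. Gaussian/Cauchy mutation noise."""
    rng = np.random.default_rng(seed)
    vertices = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.5, 1.0])]
    p = np.array([0.3, 0.3])
    pts = np.empty((n_points, 2))
    for i in range(n_points):
        v = vertices[rng.integers(3)]
        p = 0.5 * (p + v)  # contractive map: move halfway toward a vertex
        pts[i] = p
    return pts - pts.mean(axis=0)  # center the offsets around zero

def mutate(individual, scale=0.1, seed=0):
    """Mutate a real vector by adding scaled fractal offsets coordinate
    by coordinate (hypothetical EP mutation operator)."""
    x = np.asarray(individual, dtype=float)
    offs = ifs_mutation_offsets((x.size + 1) // 2, seed=seed).ravel()[: x.size]
    return x + scale * offs

child = mutate(np.zeros(6))
```

In a full EP loop this operator would replace the Gaussian perturbation step, with `scale` playing the role of the self-adapted mutation strength.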
Complex scaling behavior in animal foraging patterns
NASA Astrophysics Data System (ADS)
Premachandra, Prabhavi Kaushalya
This dissertation attempts to answer questions from two different areas of biology, ecology and neuroscience, using physics-based techniques. In Section 2, the suitability of three competing random walk models to describe the emergent movement patterns of two species of primates is tested. The truncated power law (a power law with exponential cut-off) is the most suitable random walk model for characterizing the emergent movement patterns of these primates. In Section 3, an agent-based model is used to simulate search behavior in different environments (landscapes) to investigate the impact of the resource landscape on the optimal foraging movement patterns of deterministic foragers. It should be noted that this model goes beyond previous work in that it includes parameters such as spatial memory and satiation, which have received little consideration to date in the field of movement ecology. When food availability is scarce in a tropical forest-like environment with feeding trees distributed in a clumped fashion and tree sizes following a lognormal distribution, the optimal foraging pattern of a generalist who can consume various and abundant food types indeed reaches the Lévy range and hence shows evidence of Lévy-flight-like behavior (a power law distribution with exponent between 1 and 3). Section 4 of the dissertation presents an investigation of phase transition behavior in a network of locally coupled self-sustained oscillators as the system passes through various bursting states. The results suggest that a phase transition does not occur for this locally coupled neuronal network. The data analysis in the dissertation adopts a model selection approach and relies on methods based on information theory and maximum likelihood.
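A heavy-tailed step-length distribution of the kind used to characterize such movement patterns can be sampled by inverse-CDF transformation. The sketch below uses a power law truncated to an interval [a, b] rather than the exponential cut-off form, purely for simplicity; the bounds and exponent are illustrative:

```python
import math
import random

def truncated_power_law_step(mu, a, b, rng):
    """Draw a step length from p(x) ~ x^-mu on [a, b] (mu != 1) by
    inverse-CDF sampling; a simple stand-in for the exponential cut-off form."""
    u = rng.random()
    g = 1.0 - mu
    return (a**g + u * (b**g - a**g)) ** (1.0 / g)

def levy_like_walk(n_steps, mu, rng):
    """2-D walk with uniform headings and heavy-tailed step lengths."""
    x = y = 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        r = truncated_power_law_step(mu, 1.0, 1000.0, rng)
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += r * math.cos(theta)
        y += r * math.sin(theta)
        path.append((x, y))
    return path
```

For 1 < mu < 3 the step lengths are in the Lévy range; in a model-selection analysis one would fit mu (and the cut-off) by maximum likelihood and compare candidate models by information criteria.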
Progressive Staging of Pilot Studies to Improve Phase III Trials for Motor Interventions
Dobkin, Bruce H.
2014-01-01
Based on the suboptimal research pathways that finally led to multicenter randomized clinical trials (MRCTs) of treadmill training with partial body weight support and of robotic assistive devices, strategically planned successive stages are proposed for pilot studies of novel rehabilitation interventions. Stage 1, consideration-of-concept studies, drawn from animal experiments, theories, and observations, delineate the experimental intervention in a small convenience sample of participants, so the results must be interpreted with caution. Stage 2, development-of-concept pilots, should optimize the components of the intervention, settle on the most appropriate outcome measures, and examine dose-response effects. A well-designed study that reveals no efficacy should be published to counterweight the confirmation bias of positive trials. Stage 3, demonstration-of-concept pilots, can build on what has been learned to test at least 15 participants in each arm, using random assignment and blinded outcome measures. A control group should receive an active practice intervention aimed at the same primary outcome. A third arm could receive a substantially larger dose of the experimental therapy or a combinational intervention. If only 1 site performed this trial, a different investigative group should aim to reproduce positive outcomes based on the optimal dose of motor training. Stage 3 studies ought to suggest an effect size of 0.4 or higher, so that approximately 50 participants in each arm will be the number required to test for efficacy in a stage 4, proof-of-concept MRCT. By developing a consensus around acceptable and necessary practices for each stage, similar to CONSORT recommendations for the publication of phase III clinical trials, better quality pilot studies may move quickly into better designed and more successful MRCTs of experimental interventions. PMID:19240197
Kaneko, Masato; Tanigawa, Takahiko; Hashizume, Kensei; Kajikawa, Mariko; Tajiri, Masahiro; Mueck, Wolfgang
2013-01-01
This study was designed to confirm the appropriateness of the dose setting for a Japanese phase III study of rivaroxaban in patients with non-valvular atrial fibrillation (NVAF), which had been based on model simulation employing phase II study data. The previously developed mixed-effects pharmacokinetic/pharmacodynamic (PK-PD) model, which consisted of an oral one-compartment model parameterized in terms of clearance, volume and a first-order absorption rate, was rebuilt and optimized using the data for 597 subjects from the Japanese phase III study, J-ROCKET AF. A mixed-effects modeling technique in NONMEM was used to quantify both unexplained inter-individual variability and inter-occasion variability, which are random effect parameters. The final PK and PK-PD models were evaluated to identify influential covariates. The empirical Bayes estimates of AUC and C(max) from the final PK model were consistent with the simulated results from the Japanese phase II study. There was no clear relationship between individual estimated exposures and safety-related events, and the estimated exposure levels were consistent with the global phase III data. Therefore, it was concluded that the dose selected for the phase III study with Japanese NVAF patients by means of model simulation employing phase II study data had been appropriate from the PK-PD perspective.
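The oral one-compartment model with first-order absorption mentioned above has a closed-form concentration curve. The sketch below uses generic parameter values, not the rivaroxaban estimates from the study, and checks the standard identity AUC = F·Dose/CL numerically:

```python
import math

def concentration(t, dose, ka, cl, v, f=1.0):
    """Plasma concentration C(t) for a one-compartment model with
    first-order absorption rate ka, clearance cl, and volume v."""
    ke = cl / v                                   # elimination rate constant
    coef = f * dose * ka / (v * (ka - ke))        # requires ka != ke
    return coef * (math.exp(-ke * t) - math.exp(-ka * t))

def auc_trapezoid(dose, ka, cl, v, t_end=48.0, n=4800):
    """Numerical AUC over [0, t_end] by the trapezoidal rule."""
    h = t_end / n
    cs = [concentration(i * h, dose, ka, cl, v) for i in range(n + 1)]
    return h * (sum(cs) - 0.5 * (cs[0] + cs[-1]))
```

In a population PK analysis, inter-individual and inter-occasion variability would be layered on these structural parameters as random effects; the deterministic curve above is only the structural backbone.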
NASA Astrophysics Data System (ADS)
Zhang, Xicheng; Fang, Longjie; Zuo, Haoyi; Du, Jinglei; Gao, Fuhua; Pang, Lin
2018-07-01
We study in detail whether the optimized phase distributions obtained from different approaches to focusing light through turbid media are related. We propose the view that there exists a strong correlation among the optimized phase distributions from different approaches. Numerical simulations and experiments indicate that the larger the number of segments, the greater the correlation coefficient of the optimized phase distributions from different approaches. This study may give an important insight into the essence of focusing light through turbid media by phase modulation.
This report summarizes Phase II (site optimization) of the Nationwide Fund-lead Pump and Treat Optimization Project. This phase included conducting Remediation System Evaluations (RSEs) at each of the 20 sites selected in Phase I.
Acceptance of internet-based hearing healthcare among adults who fail a hearing screening.
Rothpletz, Ann M; Moore, Ashley N; Preminger, Jill E
2016-09-01
This study measured help-seeking readiness and acceptance of existing internet-based hearing healthcare (IHHC) websites among a group of older adults who failed a hearing screening (Phase 1). It also explored the effects of brief training on participants' acceptance of IHHC (Phase 2). Twenty-seven adults (age 55+) who failed a hearing screening participated. During Phase 1 participants were administered the University of Rhode Island Change Assessment (URICA) and patient technology acceptance model (PTAM) Questionnaire. During Phase 2 participants were randomly assigned to a training or control group. Training group participants attended an instructional class on existing IHHC websites. The control group received no training. The PTAM questionnaire was re-administered to both groups 4-6 weeks following the initial assessment. The majority of participants were either considering or preparing to do something about their hearing loss, and were generally accepting of IHHC websites (Phase 1). The participants who underwent brief IHHC training reported increases in hearing healthcare knowledge and slight improvements in computer self-efficacy (Phase 2). Older adults who fail hearing screenings may be good candidates for IHHC. The incorporation of a simple user-interface and short-term training may optimize the usability of future IHHC programs for this population.
Pattern formations and optimal packing.
Mityushev, Vladimir
2016-04-01
Patterns of different symmetries may arise after solution to reaction-diffusion equations. Hexagonal arrays, layers and their perturbations are observed in different models after numerical solution to the corresponding initial-boundary value problems. We demonstrate an intimate connection between pattern formation and optimal random packing on the plane. The main study is based on the following two points. First, the diffusive flux in reaction-diffusion systems is approximated by piecewise linear functions in the framework of structural approximations. This leads to a discrete network approximation of the considered continuous problem. Second, the discrete energy minimization yields optimal random packing of the domains (disks) in the representative cell. Therefore, the general problem of pattern formation based on the reaction-diffusion equations is reduced to the geometric problem of random packing. It is demonstrated that all random packings can be divided into classes associated with classes of isomorphic graphs obtained from the Delaunay triangulation. The unique optimal solution is constructed in each class of random packings. If the number of disks per representative cell is finite, the number of classes of isomorphic graphs, and hence the number of optimal packings, is also finite.
Contextual Interference in Complex Bimanual Skill Learning Leads to Better Skill Persistence
Pauwels, Lisa; Swinnen, Stephan P.; Beets, Iseult A. M.
2014-01-01
The contextual interference (CI) effect is a robust phenomenon in the (motor) skill learning literature. However, CI has yielded mixed results in complex task learning. The current study addressed whether the CI effect generalizes to bimanual skill learning, with a focus on the temporal evolution of memory processes. In contrast to previous studies, an extensive training schedule, distributed across multiple days of practice, was provided. Participants practiced three frequency ratios across three practice days following either a blocked or random practice schedule. During the acquisition phase, better overall performance for the blocked practice group was observed, but this difference diminished as practice progressed. At immediate and delayed retention, the random practice group outperformed the blocked practice group, except for the most difficult frequency ratio. Our main finding is that the random practice group showed superior performance persistence over a one-week interval in all three frequency ratios compared to the blocked practice group. This study contributes to our understanding of learning, consolidation and memory of complex motor skills, which helps optimize training protocols in future studies and rehabilitation settings. PMID:24960171
GPURFSCREEN: a GPU based virtual screening tool using random forest classifier.
Jayaraj, P B; Ajay, Mathias K; Nufail, M; Gopakumar, G; Jaleel, U C A
2016-01-01
In-silico methods are an integral part of the modern drug discovery paradigm. Virtual screening, an in-silico method, is used to refine data models and reduce the chemical space on which wet-lab experiments need to be performed. Virtual screening of a ligand data model requires large-scale computations, making it a highly time consuming task. This process can be sped up by implementing parallelized algorithms on a Graphical Processing Unit (GPU). Random Forest is a robust classification algorithm that can be employed in virtual screening. A ligand-based virtual screening tool (GPURFSCREEN) that uses random forests on GPU systems has been proposed and evaluated in this paper. This tool produces optimized results at a lower execution time for large bioassay data sets. The quality of results produced by our tool on the GPU is the same as that in a regular serial environment. Considering the magnitude of data to be screened, the parallelized virtual screening has a significantly lower running time at high throughput. The proposed parallel tool outperforms its serial counterpart by successfully screening billions of molecules in the training and prediction phases.
Accelerating IMRT optimization by voxel sampling
NASA Astrophysics Data System (ADS)
Martin, Benjamin C.; Bortfeld, Thomas R.; Castañon, David A.
2007-12-01
This paper presents a new method for accelerating intensity-modulated radiation therapy (IMRT) optimization using voxel sampling. Rather than calculating the dose to the entire patient at each step in the optimization, the dose is only calculated for some randomly selected voxels. Those voxels are then used to calculate estimates of the objective and gradient which are used in a randomized version of a steepest descent algorithm. By selecting different voxels on each step, we are able to find an optimal solution to the full problem. We also present an algorithm to automatically choose the best sampling rate for each structure within the patient during the optimization. Seeking further improvements, we experimented with several other gradient-based optimization algorithms and found that the delta-bar-delta algorithm performs well despite the randomness. Overall, we were able to achieve approximately an order of magnitude speedup on our test case as compared to steepest descent.
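The core idea, estimating the objective gradient from a random voxel sample at each descent step, can be sketched as follows. The quadratic dose objective and the unbiased rescaling of the sampled gradient are a simplified stand-in for a real IMRT objective with dose-influence matrix A, beamlet weights w, and prescription p:

```python
import random

def sampled_gradient(A, w, presc, voxels):
    """Gradient of sum_v (A_v . w - presc_v)^2 estimated on a voxel sample."""
    n = len(w)
    g = [0.0] * n
    scale = len(A) / len(voxels)          # rescale so the estimate is unbiased
    for v in voxels:
        r = sum(A[v][k] * w[k] for k in range(n)) - presc[v]
        for k in range(n):
            g[k] += scale * 2.0 * r * A[v][k]
    return g

def optimize(A, presc, sample_size, steps, lr, rng):
    """Steepest descent where each step sees only a random voxel sample."""
    w = [0.0] * len(A[0])
    for _ in range(steps):
        voxels = rng.sample(range(len(A)), sample_size)
        g = sampled_gradient(A, w, presc, voxels)
        w = [wi - lr * gi for wi, gi in zip(w, g)]
    return w
```

Because different voxels are drawn at every step, the iterates still converge toward the optimum of the full objective, which is what makes the sampling a speedup rather than an approximation of a different problem.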
Fu, Juanjuan; Ding, Hong; Yang, Haimiao; Huang, Yuhong
2017-01-01
Background Common cold is one of the most frequently occurring illnesses in primary healthcare services and represents considerable disease burden. Common cold of Qi-deficiency syndrome (CCQDS) is an important but less addressed traditional Chinese medicine (TCM) pattern. We designed a protocol to explore the efficacy, safety, and optimal dose of Shen Guo Lao Nian Granule (SGLNG) for treating CCQDS. Methods/Design This is a multicenter, randomized, double-blind, placebo-controlled, phase II clinical trial. A total of 240 eligible patients will be recruited from five centers. Patients are randomly assigned to high-dose group, middle-dose group, low-dose group, or control group in a 1 : 1 : 1 : 1 ratio. All drugs are required to be taken 3 times daily for 5 days with a 5-day follow-up period. Primary outcomes are duration of all symptoms, total score reduction on Jackson's scale, and TCM symptoms scale. Secondary outcomes include every single TCM symptom duration and score reduction, TCM main symptoms disappearance rate, curative effects, and comparison between Jackson's scale and TCM symptom scale. Ethics and Trial Registration This study protocol was approved by the Ethics Committee of Clinical Trials and Biomedicine of West China Hospital of Sichuan University (number IRB-2014-12) and registered with the Chinese Clinical Trial Registry (ChiCTR-IPR-15006349). PMID:29430253
A phase 2 randomized trial of ELND005, scyllo-inositol, in mild to moderate Alzheimer disease
Sperling, R.; Keren, R.; Porsteinsson, A.P.; van Dyck, C.H.; Tariot, P.N.; Gilman, S.; Arnold, D.; Abushakra, S.; Hernandez, C.; Crans, G.; Liang, E.; Quinn, G.; Bairu, M.; Pastrak, A.; Cedarbaum, J.M.
2011-01-01
Objective: This randomized, double-blind, placebo-controlled, dose-ranging phase 2 study explored safety, efficacy, and biomarker effects of ELND005 (an oral amyloid anti-aggregation agent) in mild to moderate Alzheimer disease (AD). Methods: A total of 353 patients were randomized to ELND005 (250, 1,000, or 2,000 mg) or placebo twice daily for 78 weeks. Coprimary endpoints were the Neuropsychological Test Battery (NTB) and Alzheimer's Disease Cooperative Study–Activities of Daily Living (ADCS-ADL) scale. The primary analysis compared 250 mg (n = 84) to placebo (n = 82) after an imbalance of infections and deaths led to early discontinuation of the 2 higher dose groups. Results: The 250 mg dose demonstrated acceptable safety. The primary efficacy analysis at 78 weeks revealed no significant differences between the treatment groups on the NTB or ADCS-ADL. Brain ventricular volume showed a small but significant increase in the overall 250 mg group (p = 0.049). At the 250 mg dose, scyllo-inositol concentrations increased in CSF and brain, and CSF Aβx-42 was decreased significantly compared to placebo (p = 0.009). Conclusions: Primary clinical efficacy outcomes were not significant. The safety and CSF biomarker results will guide selection of the optimal dose for future studies, which will target earlier stages of AD. Classification of evidence: Due to the small sample sizes, this Class II trial provides insufficient evidence to support or refute a benefit of ELND005. PMID:21917766
Optimal partitioning of random programs across two processors
NASA Technical Reports Server (NTRS)
Nicol, D. M.
1986-01-01
The optimal partitioning of random distributed programs is discussed. It is concluded that the optimal partitioning of a homogeneous random program over a homogeneous distributed system either assigns all modules to a single processor, or distributes the modules as evenly as possible among all processors. The analysis rests heavily on the approximation which equates the expected maximum of a set of independent random variables with the set's maximum expectation. The results are strengthened by providing an approximation-free proof of this result for two processors under general conditions on the module execution time distribution. It is also shown that use of this approximation causes two of the previous central results to be false.
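The approximation discussed above equates the expected maximum of a set of independent random variables with the maximum of their expectations. A quick Monte Carlo sketch (using exponential module execution times, an arbitrary choice) shows that the two quantities differ, which is why the approximation-free proof matters:

```python
import random
import statistics

def compare(n, rng):
    """Monte Carlo comparison of E[max(X, Y)] with max(E[X], E[Y]) for
    independent unit-mean exponential module execution times X and Y."""
    xs = [rng.expovariate(1.0) for _ in range(n)]
    ys = [rng.expovariate(1.0) for _ in range(n)]
    e_max = statistics.fmean(max(x, y) for x, y in zip(xs, ys))
    max_e = max(statistics.fmean(xs), statistics.fmean(ys))
    return e_max, max_e
```

For two unit-mean exponentials, E[max] = 1.5 while max of the means is 1, so the approximation systematically underestimates the expected finishing time of the slower processor.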
Boson expansions based on the random phase approximation representation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pedrocchi, V.G.; Tamura, T.
1984-04-01
A new boson expansion theory based on the random phase approximation is presented. The boson expansions are derived directly in the random phase approximation representation with the help of a technique that combines the use of the Usui operator with a new bosonization procedure, called the term-by-term bosonization method. The present boson expansion theory is constructed by retaining a single collective quadrupole random phase approximation component, a truncation that allows for a perturbative treatment of the whole problem. Both Hermitian and non-Hermitian boson expansions, valid for even nuclei, are obtained.
Parke, Tom; Marchenko, Olga; Anisimov, Vladimir; Ivanova, Anastasia; Jennison, Christopher; Perevozskaya, Inna; Song, Guochen
2017-01-01
Designing an oncology clinical program is more challenging than designing a single study. The standard approaches have proven not very successful during the last decade; the failure rate of Phase 2 and Phase 3 trials in oncology remains high. Improving a development strategy by applying innovative statistical methods is one of the major objectives of a drug development process. The oncology sub-team on Adaptive Program under the Drug Information Association Adaptive Design Scientific Working Group (DIA ADSWG) evaluated hypothetical oncology programs with two competing treatments and published the work in the Therapeutic Innovation and Regulatory Science journal in January 2014. Five oncology development programs based on different Phase 2 designs, including adaptive designs, and a standard two-parallel-arm Phase 3 design were simulated and compared in terms of the probability of clinical program success and expected net present value (eNPV). In this article, we consider eight Phase 2/Phase 3 development programs based on selected combinations of five Phase 2 study designs and three Phase 3 study designs. We again used the probability of program success and eNPV to compare the simulated programs. Among the development strategies considered, the eNPV showed robust improvement for each successive strategy, the highest being for a three-arm response-adaptive randomization design in Phase 2 combined with a group sequential design with 5 analyses in Phase 3.
Multiobjective hyper heuristic scheme for system design and optimization
NASA Astrophysics Data System (ADS)
Rafique, Amer Farhan
2012-11-01
As system design is becoming more and more multifaceted, integrated, and complex, the traditional single-objective trends of optimal design are becoming less efficient and effective. Single-objective optimization methods present a unique optimal solution, whereas multiobjective methods present a Pareto front. The foremost intent is to predict a reasonably distributed Pareto-optimal solution set, independent of the problem instance, through a multiobjective scheme. Another objective of the intended approach is to improve the worth of the outputs of the complex engineering system design process at the conceptual design phase. The process is automated in order to provide the system designer with the leverage of studying and analyzing a large number of possible solutions in a short time. This article presents a Multiobjective Hyper Heuristic Optimization Scheme based on low-level meta-heuristics, developed for application in engineering system design. Herein, we present a stochastic function to manage the low-level meta-heuristics and increase the likelihood of reaching a global optimum solution. Genetic Algorithm, Simulated Annealing and Swarm Intelligence are used as low-level meta-heuristics in this study. Performance of the proposed scheme is investigated through a comprehensive empirical analysis, yielding acceptable results. One of the primary motives for performing multiobjective optimization is that current engineering systems require simultaneous optimization of multiple conflicting objectives. Random decision making makes the implementation of this scheme attractive and easy. Injecting feasible solutions significantly alters the search direction and adds population diversity, resulting in accomplishment of the pre-defined goals set in the proposed scheme.
A novel microseeding method for the crystallization of membrane proteins in lipidic cubic phase.
Kolek, Stefan Andrew; Bräuning, Bastian; Stewart, Patrick Douglas Shaw
2016-04-01
Random microseed matrix screening (rMMS), in which seed crystals are added to random crystallization screens, is an important breakthrough in soluble protein crystallization that increases the number of crystallization hits that are available for optimization. This greatly increases the number of soluble protein structures generated every year by typical structural biology laboratories. Inspired by this success, rMMS has been adapted to the crystallization of membrane proteins, making LCP seed stock by scaling up LCP crystallization conditions without changing the physical and chemical parameters that are critical for crystallization. Seed crystals are grown directly in LCP and, as with conventional rMMS, a seeding experiment is combined with an additive experiment. The new method was used with the bacterial integral membrane protein OmpF, and it was found that it increased the number of crystallization hits by almost an order of magnitude: without microseeding one new hit was found, whereas with LCP-rMMS eight new hits were found. It is anticipated that this new method will lead to better diffracting crystals of membrane proteins. A method of generating seed gradients, which allows the LCP seed stock to be diluted and the number of crystals in each LCP bolus to be reduced, if required for optimization, is also demonstrated.
NASA Astrophysics Data System (ADS)
Cogoni, Marco; Busonera, Giovanni; Anedda, Paolo; Zanetti, Gianluigi
2015-01-01
We generalize previous studies on critical phenomena in communication networks [1,2] by adding computational capabilities to the nodes. In our model, a set of tasks with random origin, destination and computational structure is distributed on a computational network, modeled as a graph. By varying the temperature of a Metropolis Monte Carlo algorithm, we explore the global latency for an optimal to suboptimal resource assignment at a given time instant. By computing the two-point correlation function for the local overload, we study the behavior of the correlation distance (both for links and nodes) while approaching the congested phase: a transition from peaked to spread g(r) is seen above a critical (Monte Carlo) temperature Tc. The average latency trend of the system is predicted by averaging over several network traffic realizations while maintaining spatially detailed information for each node: a sharp decrease of performance is found over Tc independently of the workload. The globally optimized computational resource allocation and network routing defines a baseline for a future comparison of the transition behavior with respect to existing routing strategies [3,4] for different network topologies.
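The Metropolis Monte Carlo exploration of task-to-node assignments can be sketched with a toy latency function; the quadratic overload penalty and the single-task move set below are illustrative, not the model of the paper:

```python
import math
import random

def latency(assign, n_nodes):
    """Toy global latency: quadratic penalty for overloaded nodes."""
    load = [0] * n_nodes
    for node in assign:
        load[node] += 1
    return sum(l * l for l in load)

def metropolis_assign(n_tasks, n_nodes, temp, steps, rng):
    """Anneal a random task->node assignment at a fixed Metropolis temperature."""
    assign = [rng.randrange(n_nodes) for _ in range(n_tasks)]
    cost = latency(assign, n_nodes)
    for _ in range(steps):
        t = rng.randrange(n_tasks)
        old = assign[t]
        assign[t] = rng.randrange(n_nodes)      # propose moving one task
        new_cost = latency(assign, n_nodes)
        dE = new_cost - cost
        if dE <= 0 or rng.random() < math.exp(-dE / temp):
            cost = new_cost                     # accept
        else:
            assign[t] = old                     # reject, restore
    return assign, cost
```

Sweeping the temperature of such a sampler from low to high interpolates between the globally optimized assignment and increasingly suboptimal ones, which is the knob used to probe the congestion transition.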
Pacheco, Shaun; Brand, Jonathan F.; Zaverton, Melissa; Milster, Tom; Liang, Rongguang
2015-01-01
A method to design one-dimensional beam-spitting phase gratings with low sensitivity to fabrication errors is described. The method optimizes the phase function of a grating by minimizing the integrated variance of the energy of each output beam over a range of fabrication errors. Numerical results for three 1x9 beam splitting phase gratings are given. Two optimized gratings with low sensitivity to fabrication errors were compared with a grating designed for optimal efficiency. These three gratings were fabricated using gray-scale photolithography. The standard deviation of the 9 outgoing beam energies in the optimized gratings were 2.3 and 3.4 times lower than the optimal efficiency grating. PMID:25969268
Ultra-fast quantum randomness generation by accelerated phase diffusion in a pulsed laser diode.
Abellán, C; Amaya, W; Jofre, M; Curty, M; Acín, A; Capmany, J; Pruneri, V; Mitchell, M W
2014-01-27
We demonstrate a high bit-rate quantum random number generator by interferometric detection of phase diffusion in a gain-switched DFB laser diode. Gain switching at few-GHz frequencies produces a train of bright pulses with nearly equal amplitudes and random phases. An unbalanced Mach-Zehnder interferometer is used to interfere subsequent pulses and thereby generate strong random-amplitude pulses, which are detected and digitized to produce a high-rate random bit string. Using established models of semiconductor laser field dynamics, we predict a regime of high visibility interference and nearly complete vacuum-fluctuation-induced phase diffusion between pulses. These are confirmed by measurement of pulse power statistics at the output of the interferometer. Using a 5.825 GHz excitation rate and 14-bit digitization, we observe 43 Gbps quantum randomness generation.
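In the ideal limit, the phase-diffusion mechanism reduces to interfering consecutive equal-amplitude pulses with independent uniform phases and digitizing the resulting intensity. A one-bit-per-pulse sketch (the real device digitizes at 14 bits and extracts randomness downstream):

```python
import math
import random

def random_bits(n_pulses, rng):
    """Interfere consecutive equal-amplitude pulses with uniformly random
    phases; the interference intensity is thresholded at its median."""
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n_pulses)]
    # intensity of the interference of pulse k with pulse k-1:
    # |e^{i phi_k} + e^{i phi_{k-1}}|^2 = 2 + 2 cos(phi_k - phi_{k-1})
    intensities = [2.0 + 2.0 * math.cos(phases[k] - phases[k - 1])
                   for k in range(1, n_pulses)]
    return [1 if v > 2.0 else 0 for v in intensities]  # median of 2+2cos(U) is 2
```

Here the classical `random.uniform` stands in for the vacuum-fluctuation-induced phase diffusion; in the experiment the uniformity of the interpulse phase is what guarantees the quantum origin of the bits.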
Effect of Phase-Breaking Events on Electron Transport in Mesoscopic and Nanodevices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meunier, Vincent; Mintmire, John W; Thushari, Jayasekera
2008-01-01
Existing ballistic models for electron transport in mesoscopic and nanoscale systems break down as the size of the device becomes longer than the phase coherence length of electrons in the system. Krstic et al. experimentally observed that the current in single-wall carbon nanotube systems can be regarded as a combination of a coherent part and a noncoherent part. In this article, we discuss the use of the Buettiker phase-breaking technique to address partially coherent electron transport, generalize it to a multichannel problem, and then study the effect of phase-breaking events on the electron transport in two-terminal graphene nanoribbon devices. We also investigate the difference between the pure-phase randomization and phase/momentum randomization boundary conditions. While momentum randomization adds an extra resistance caused by backward scattering, pure-phase randomization smooths the conductance oscillations because of interference.
Quenched bond randomness: Superfluidity in porous media and the strong violation of universality
DOE Office of Scientific and Technical Information (OSTI.GOV)
Falicov, A.; Berker, A.N.
1997-04-01
The effects of quenched bond randomness are most readily studied with superfluidity immersed in a porous medium. A lattice model for ³He-⁴He mixtures and incomplete ⁴He fillings in aerogel yields the signature effect of bond randomness, namely the conversion of symmetry-breaking first-order phase transitions into second-order phase transitions, the A-line reaching zero temperature, and the elimination of non-symmetry-breaking first-order phase transitions. The model recognizes the importance of the connected nature of aerogel randomness and thereby yields superfluidity at very low ⁴He concentrations, a phase separation entirely within the superfluid phase, and the order-parameter contrast between mixtures and incomplete fillings, all in agreement with experiments. The special properties of the helium mixture/aerogel system are distinctly linked to the aerogel properties of connectivity, randomness, and tenuousness, via the additional study of a regularized "jungle-gym" aerogel. Renormalization-group calculations indicate that a strong violation of the empirical universality principle of critical phenomena occurs under quenched bond randomness. It is argued that helium/aerogel critical properties reflect this violation, and further experiments are suggested. Renormalization-group analysis also shows that, adjoining the strong universality violation (which hinges on the occurrence or non-occurrence of asymptotic strong coupling-strong randomness under rescaling), there is a new "hyperuniversality" at phase transitions with asymptotic strong coupling-strong randomness behavior, for example assigning the same critical exponents to random-bond tricriticality and random-field criticality.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Wei, E-mail: Liu.Wei@mayo.edu; Schild, Steven E.; Chang, Joe Y.
Purpose: The purpose of this study was to compare the impact of uncertainties and interplay on 3-dimensional (3D) and 4D robustly optimized intensity modulated proton therapy (IMPT) plans for lung cancer in an exploratory methodology study. Methods and Materials: IMPT plans were created for 11 nonrandomly selected non-small cell lung cancer (NSCLC) cases: 3D robustly optimized plans on average CTs with internal gross tumor volume density overridden to irradiate the internal target volume, and 4D robustly optimized plans on 4D computed tomography (CT) to irradiate the clinical target volume (CTV). Regular fractionation (66 Gy [relative biological effectiveness; RBE] in 33 fractions) was considered. In 4D optimization, the CTV of individual phases received nonuniform doses to achieve a uniform cumulative dose. The root-mean-square dose-volume histograms (RVH) measured the sensitivity of the dose to uncertainties, and the areas under the RVH curve (AUCs) were used to evaluate plan robustness. Dose evaluation software modeled time-dependent spot delivery to incorporate the interplay effect with randomized starting phases of each field per fraction. Dose-volume histogram (DVH) indices comparing CTV coverage, homogeneity, and normal tissue sparing were evaluated using the Wilcoxon signed rank test. Results: 4D robust optimization led to a smaller AUC for the CTV (14.26 vs 18.61; P=.001), better CTV coverage (Gy [RBE]) (D95% CTV: 60.6 vs 55.2; P=.001), and better CTV homogeneity (D5%-D95% CTV: 10.3 vs 17.7; P=.002) in the face of uncertainties. With the interplay effect considered, 4D robust optimization produced plans with better target coverage (D95% CTV: 64.5 vs 63.8; P=.0068), comparable target homogeneity, and comparable normal tissue protection. The benefits from 4D robust optimization were most obvious for the 2 typical stage III lung cancer patients.
Conclusions: Our exploratory methodology study showed that, compared to 3D robust optimization, 4D robust optimization produced significantly more robust and interplay-effect-resistant plans for targets, with comparable dose distributions for normal tissues. A further study with a larger and more realistic patient population is warranted to generalize the conclusions.
Generalized gradient algorithm for trajectory optimization
NASA Technical Reports Server (NTRS)
Zhao, Yiyuan; Bryson, A. E.; Slattery, R.
1990-01-01
The generalized gradient algorithm is presented and verified as a basis for solving trajectory optimization problems: it improves the performance index while reducing violations of path equality constraints and terminal equality constraints. The algorithm divides conveniently into two phases: a first, 'feasibility' phase that yields a solution satisfying both path and terminal constraints, and a second, 'optimization' phase that uses the results of the first phase as initial guesses.
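The two-phase structure can be illustrated with a minimal sketch (the toy problem, step sizes, and iteration counts below are my own assumptions, not the paper's algorithm): phase one drives a quadratic penalty on the constraint to zero, and phase two runs projected gradient descent starting from the feasible point that phase one produced.

```python
import numpy as np

def f(x):            # performance index to minimize
    return x @ x

def grad_f(x):
    return 2 * x

def c(x):            # equality constraint c(x) = 0 (here: x + y = 1)
    return x[0] + x[1] - 1.0

grad_c = np.array([1.0, 1.0])

# Phase 1 ("feasibility"): gradient descent on the violation c(x)**2.
x = np.array([3.0, 0.0])
for _ in range(200):
    x -= 0.1 * 2 * c(x) * grad_c          # gradient of c(x)**2

# Phase 2 ("optimization"): from the feasible phase-1 point, step along
# -grad(f) projected onto the constraint's tangent space.
n = grad_c / np.linalg.norm(grad_c)
for _ in range(500):
    g = grad_f(x)
    x -= 0.05 * (g - (g @ n) * n)

print(np.round(x, 3))   # -> [0.5 0.5], the constrained minimizer
```

For this linear constraint the tangent-space projection preserves feasibility exactly, so phase two never re-violates what phase one achieved.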
De Lara, Michel
2006-05-01
In their 1990 paper "Optimal reproductive efforts and the timing of reproduction of annual plants in randomly varying environments", Amir and Cohen considered stochastic environments consisting of i.i.d. sequences in an optimal allocation discrete-time model. We suppose here that the sequence of environmental factors is more generally described by a Markov chain. Moreover, we discuss the connection between the time interval of the discrete-time dynamic model and the ability of the plant to rebuild its vegetative body completely (from reserves). We formulate a stochastic optimization problem covering the so-called linear and logarithmic fitness (corresponding to variation within and between years), which yields optimal strategies. For "linear maximizers", we analyse how optimal strategies depend upon the type of environmental variability: constant, random stationary, random i.i.d., random monotone. We provide general patterns in terms of targets and thresholds, including both determinate and indeterminate growth. We also provide a partial result on the comparison between "linear maximizers" and "log maximizers". Numerical simulations are provided, giving a hint of the effect of the different mathematical assumptions.
Encrypted holographic data storage based on orthogonal-phase-code multiplexing.
Heanue, J F; Bashaw, M C; Hesselink, L
1995-09-10
We describe an encrypted holographic data-storage system that combines orthogonal-phase-code multiplexing with a random-phase key. The system offers the security advantages of random-phase coding but retains the low cross-talk performance and the minimum code storage requirements typical in an orthogonal-phase-code-multiplexing system.
Remmersmann, Christian; Stürwald, Stephan; Kemper, Björn; Langehanenberg, Patrik; von Bally, Gert
2009-03-10
In temporal phase-shifting-based digital holographic microscopy, high-resolution phase contrast imaging requires optimized conditions for hologram recording and phase retrieval. To optimize the phase resolution, for the example of a variable three-step algorithm, a theoretical analysis on statistical errors, digitalization errors, uncorrelated errors, and errors due to a misaligned temporal phase shift is carried out. In a second step the theoretically predicted results are compared to the measured phase noise obtained from comparative experimental investigations with several coherent and partially coherent light sources. Finally, the applicability for noise reduction is demonstrated by quantitative phase contrast imaging of pancreas tumor cells.
Multiphase contrast medium injection for optimization of computed tomographic coronary angiography.
Budoff, Matthew Jay; Shinbane, Jerold S; Child, Janis; Carson, Sivi; Chau, Alex; Liu, Stephen H; Mao, SongShou
2006-02-01
Electron beam angiography is a minimally invasive imaging technique. Adequate vascular opacification throughout the study remains a critical issue for image quality. We hypothesized that vascular image opacification and uniformity of vascular enhancement between slices can be improved using multiphase contrast medium injection protocols. We enrolled 244 consecutive patients who were randomized to three different injection protocols: single-phase contrast medium injection (Group 1), dual-phase contrast medium injection with each phase at a different injection rate (Group 2), and a three-phase injection with two phases of contrast medium injection followed by a saline injection phase (Group 3). Parameters measured were aortic opacification based on Hounsfield units and uniformity of aortic enhancement at predetermined slices (locations from top [level 1] to base [level 60]). In Group 1, contrast opacification differed across seven predetermined locations (scan levels: 1st versus 60th, P < .05), demonstrating significant nonuniformity. In Group 2, there was more uniform vascular enhancement, with no significant differences between the first 50 slices (P > .05). In Group 3, there was greater uniformity of vascular enhancement and higher mean Hounsfield units value across all 60 images, from the aortic root to the base of the heart (P < .05). The three-phase injection protocol improved vascular opacification at the base of the heart, as well as uniformity of arterial enhancement throughout the study.
Random phase detection in multidimensional NMR.
Maciejewski, Mark W; Fenwick, Matthew; Schuyler, Adam D; Stern, Alan S; Gorbatyuk, Vitaliy; Hoch, Jeffrey C
2011-10-04
Despite advances in resolution accompanying the development of high-field superconducting magnets, biomolecular applications of NMR require multiple dimensions in order to resolve individual resonances, and the achievable resolution is typically limited by practical constraints on measuring time. In addition to the need for measuring long evolution times to obtain high resolution, the need to distinguish the sign of the frequency constrains the ability to shorten measuring times. Sign discrimination is typically accomplished by sampling the signal with two different receiver phases or by selecting a reference frequency outside the range of frequencies spanned by the signal and then sampling at a higher rate. In the parametrically sampled (indirect) time dimensions of multidimensional NMR experiments, either method imposes an additional factor of 2 sampling burden for each dimension. We demonstrate that by using a single detector phase at each time sample point, but randomly altering the phase for different points, the sign ambiguity that attends fixed single-phase detection is resolved. Random phase detection enables a reduction in experiment time by a factor of 2 for each indirect dimension, amounting to a factor of 8 for a four-dimensional experiment, albeit at the cost of introducing sampling artifacts. Alternatively, for fixed measuring time, random phase detection can be used to double resolution in each indirect dimension. Random phase detection is complementary to nonuniform sampling methods, and their combination offers the potential for additional benefits. In addition to applications in biomolecular NMR, random phase detection could be useful in magnetic resonance imaging and other signal processing contexts.
Random sequences generation through optical measurements by phase-shifting interferometry
NASA Astrophysics Data System (ADS)
François, M.; Grosges, T.; Barchiesi, D.; Erra, R.; Cornet, A.
2012-04-01
The development of new techniques for producing random sequences with a high level of security is a challenging topic of research in modern cryptography. The proposed method is based on the measurement, by phase-shifting interferometry, of the speckle signals from the interaction between light and structures. We show how the combination of amplitude and phase distributions (maps) under a numerical process can produce random sequences. The produced sequences satisfy all the statistical requirements of randomness and can be used in cryptographic schemes.
2012-10-10
Flat-Top Sector Beams Using Only Array Element Phase Weighting: A Metaheuristic Optimization Approach
Olin, Irwin D. (Sotera Defense Solutions, Inc.; Naval Research Laboratory)
2012 Formal Report; manuscript approved June 30, 2012.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Donghai; Deng, Yongkai; Chu, Saisai
2016-07-11
Single-nanoparticle two-photon microscopy shows great application potential in super-resolution cell imaging. Here, we report in situ adaptive optimization of single-nanoparticle two-photon luminescence signals by phase and polarization modulation of broadband laser pulses. For polarization-independent quantum dots, phase-only optimization was carried out to compensate the phase dispersion at the focus of the objective. Enhancement of the two-photon excitation fluorescence intensity under dispersion-compensated femtosecond pulses was achieved. For a polarization-dependent single gold nanorod, in situ polarization optimization resulted in a further enhancement of two-photon photoluminescence intensity beyond phase-only optimization. In situ adaptive control of femtosecond pulses provides a way toward object-oriented optimization of single-nanoparticle two-photon microscopy for future applications.
NASA Astrophysics Data System (ADS)
Cisty, Milan; Bajtek, Zbynek; Celar, Lubomir; Soldanova, Veronika
2017-04-01
Finding effective ways to build irrigation systems which meet irrigation demands and also achieve positive environmental and economic outcomes requires, among other activities, the development of new modelling tools. Due to the high costs associated with the necessary material and the installation of an irrigation water distribution system (WDS), it is essential to optimize the design of the WDS while the hydraulic requirements of the network (e.g., the required pressure at irrigation machines) are satisfied. In this work an optimal design of a water distribution network is proposed for large irrigation networks. A multi-step approach is proposed in which the optimization is accomplished in two phases. In the first phase, sub-optimal solutions are searched for; in the second phase, the optimization problem is solved over a reduced search space based on these solutions, which significantly supports the finding of an optimal solution. The first phase consists of several runs of NSGA-II, applied by varying its parameters for every run, i.e., changing the population size, the number of generations, and the crossover and mutation parameters. This is done with the aim of obtaining different sub-optimal solutions which have a relatively low cost. These sub-optimal solutions are subsequently used in the second phase of the proposed methodology, in which the final optimization run builds on the sub-optimal solutions from the previous phase. The purpose of the second phase is to improve the results of the first phase by searching through the reduced search space. The reduction is based on the minimum and maximum diameters for each pipe over all the networks from the first stage; in this phase, NSGA-II does not consider diameters outside of this range.
After the second-phase NSGA-II computations, the present work achieved the best result published so far for the Balerma benchmark network, which was used for methodology testing. The knowledge gained from these computational experiments lies not in offering a new advanced heuristic or hybrid optimization method for water distribution networks, but in the fact that it is possible to obtain very good results with simple, known methods if they are used in a methodologically sound way. Acknowledgement: This work was supported by the Slovak Research and Development Agency under Contract No. APVV-15-0489 and by the Scientific Grant Agency of the Ministry of Education of the Slovak Republic and the Slovak Academy of Sciences, Grant No. 1/0665/15.
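The second-phase search-space reduction can be sketched as follows. The diameter set, the toy cost function, and the plain random search standing in for NSGA-II are all illustrative assumptions; only the per-pipe [min, max] reduction mirrors the methodology described above.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical stand-in for the hydraulic design problem: one diameter
# per pipe from a discrete catalogue, minimizing a toy cost.
diameters = np.array([80, 100, 125, 150, 200, 250])   # mm, illustrative
n_pipes = 10

def cost(design):                                     # toy objective
    return np.sum(design ** 1.5)

# Phase 1: several independent runs leave us with sub-optimal designs.
phase1 = [rng.choice(diameters, size=n_pipes) for _ in range(5)]

# Phase 2: restrict each pipe to the [min, max] diameter seen in phase 1 ...
lo = np.min(phase1, axis=0)
hi = np.max(phase1, axis=0)

# ... and search only inside that reduced space.
def sample_reduced():
    return np.array([
        rng.choice(diameters[(diameters >= lo[i]) & (diameters <= hi[i])])
        for i in range(n_pipes)
    ])

best = min((sample_reduced() for _ in range(2000)), key=cost)
print(cost(best))
```

The point of the reduction is that every candidate evaluated in phase two already lies inside the envelope of plausible designs, so the final search spends its budget refining rather than rediscovering.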
Random Matrix Approach for Primal-Dual Portfolio Optimization Problems
NASA Astrophysics Data System (ADS)
Tada, Daichi; Yamamoto, Hisashi; Shinzato, Takashi
2017-12-01
In this paper, we revisit the portfolio optimization problems of the minimization/maximization of investment risk under constraints of budget and investment concentration (primal problem) and the maximization/minimization of investment concentration under constraints of budget and investment risk (dual problem) for the case that the variances of the return rates of the assets are identical. We analyze both optimization problems by the Lagrange multiplier method and the random matrix approach. Thereafter, we compare the results obtained from our proposed approach with the results obtained in previous work. Moreover, we use numerical experiments to validate the results obtained from the replica approach and the random matrix approach as methods for analyzing both the primal and dual portfolio optimization problems.
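Under the identical-variance assumption, the primal problem admits a closed form via the Lagrange multiplier method. A minimal numeric check follows; the budget normalization 1ᵀw = N is one common convention, assumed here rather than taken from the paper.

```python
import numpy as np

N = 5
sigma2 = 2.0
Sigma = sigma2 * np.eye(N)        # identical variances, no correlations

# Primal problem: minimize w' Sigma w  subject to the budget  1'w = N.
# Lagrangian: L = w' Sigma w - theta*(1'w - N)  =>  w = (theta/2) Sigma^{-1} 1,
# with theta fixed by the budget constraint.
ones = np.ones(N)
Sinv1 = np.linalg.solve(Sigma, ones)
w = N * Sinv1 / (ones @ Sinv1)

print(w)                          # -> [1. 1. 1. 1. 1.]  (equal weights)
print(w @ Sigma @ w)              # minimal risk = N * sigma2 = 10.0
```

With identical variances the minimizer spreads the budget equally, which is the baseline against which the concentration-constrained primal and dual problems in the paper are analyzed.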
McNally, Dayre; Amrein, Karin; O'Hearn, Katharine; Fergusson, Dean; Geier, Pavel; Henderson, Matt; Khamessan, Ali; Lawson, Margaret L; McIntyre, Lauralyn; Redpath, Stephanie; Weiler, Hope A; Menon, Kusum
2017-01-01
Clinical research has recently demonstrated that vitamin D deficiency (VDD) is highly prevalent in the pediatric intensive care unit (PICU) and associated with worse clinical course. Multiple adult ICU trials have suggested that optimization of vitamin D status through high-dose supplementation may reduce mortality and improve other clinically relevant outcomes; however, there have been no trials of rapid normalization in the PICU setting. The objective of this study is to evaluate the safety and efficacy of an enteral weight-based cholecalciferol loading dose regimen in critically ill children with VDD. The VITdAL-PICU pilot study is designed as a multicenter placebo-controlled phase II dose evaluation pilot randomized controlled trial. We aim to randomize 67 VDD critically ill children using a 2:1 randomization schema to receive loading dose enteral cholecalciferol (10,000 IU/kg, maximum of 400,000 IU) or a placebo solution. Participants, caregivers and outcome assessors will be blinded to allocation. Eligibility criteria include ICU patient, aged 37 weeks to 18 years, expected ICU length of stay more than 48 h, anticipated access to bloodwork at 7 days, and VDD (blood total 25 hydroxyvitamin D < 50 nmol/L). The primary objective is to determine whether the dosing protocol normalizes vitamin D status, defined as a blood total 25(OH)D concentration above 75 nmol/L. Secondary objectives include an examination of the safety of the dosing regimen (e.g. hypercalcemia, hypercalciuria, nephrocalcinosis), measures of vitamin D axis function (e.g. calcitriol levels, immune function), and protocol feasibility (eligibility criteria, protocol deviations, blinding). Despite significant observational literature suggesting VDD to be a modifiable risk factor in the PICU setting, there is no robust clinical trial evidence evaluating the benefits of rapid normalization. 
This phase II clinical trial will evaluate an innovative weight-based dosing regimen intended to rapidly and safely normalize vitamin D levels in critically ill children. Study findings will be used to inform the design of a multicenter phase III trial evaluating the clinical and economic benefits of rapid normalization. Recruitment for this trial was initiated in January 2016 and is expected to continue until November 30, 2017. Clinicaltrials.gov NCT02452762.
Wilkin, Timothy J.; Su, Zhaohui; Krambrink, Amy; Long, Jianmin; Greaves, Wayne; Gross, Robert; Hughes, Michael D.; Flexner, Charles; Skolnik, Paul R.; Coakley, Eoin; Godfrey, Catherine; Hirsch, Martin; Kuritzkes, Daniel R.; Gulick, Roy M.
2010-01-01
Background Vicriviroc, an investigational CCR5 antagonist, demonstrated short-term safety and antiretroviral activity. Methods Phase 2, double-blind, randomized study of vicriviroc in treatment-experienced subjects with CCR5-using HIV-1. Vicriviroc (5, 10 or 15 mg) or placebo was added to a failing regimen with optimization of background antiretroviral medications at day 14. Subjects experiencing virologic failure and subjects completing 48 weeks were offered open-label vicriviroc. Results 118 subjects were randomized. Virologic failure (<1 log10 decline in HIV-1 RNA ≥16 weeks post-randomization) occurred by week 48 in 24/28 (86%), 12/30 (40%), 8/30 (27%), 10/30 (33%) of subjects randomized to placebo, 5, 10 and 15 mg respectively. Overall, 113 subjects received vicriviroc at randomization or after virologic failure, and 52 (46%) achieved HIV-1 RNA <50 copies/mL within 24 weeks. Through 3 years, 49% of those achieving suppression did not experience confirmed viral rebound. Dual or mixed-tropic HIV-1 was detected in 33 (29%). Vicriviroc resistance (progressive decrease in maximal percentage inhibition on phenotypic testing) was detected in 6 subjects. Nine subjects discontinued vicriviroc due to adverse events. Conclusions Vicriviroc appears safe and demonstrates sustained virologic suppression through 3 years of follow-up. Further trials of vicriviroc will establish its clinical utility for the treatment of HIV-1 infection. PMID:20672447
Silva, Camila F A; de Vasconcelos, Simone G; da Silva, Thales A; Silva, Flávia M
2018-01-26
The aim of this study was to systematically review the effect of permissive underfeeding/trophic feeding on the clinical outcomes of critically ill patients. A systematic review of randomized clinical trials to evaluate the mortality, length of stay, and mechanical ventilation duration in patients randomized to either hypocaloric or full-energy enteral nutrition was performed. Data sources included PubMed and Scopus and the reference lists of the articles retrieved. Two independent reviewers participated in all phases of this systematic review as proposed by the Cochrane Handbook, and the review was reported according to Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. A total of 7 randomized clinical trials that included a total of 1,717 patients were reviewed. Intensive care unit length of stay and mechanical ventilation duration were not statistically different between the intervention and control groups in all randomized clinical trials, and mortality rate was also not different between the groups. In conclusion, hypocaloric enteral nutrition had no significantly different effects on morbidity and mortality in critically ill patients when compared with full-energy nutrition. It is still necessary to determine the safety of this intervention in this group of patients, the optimal amount of energy provided, and the duration of this therapy. © 2018 American Society for Parenteral and Enteral Nutrition.
Theoretical Foundations of Wireless Networks
2015-07-22
"Optimal transmission over a fading channel with imperfect channel state information," in Global Telecommun. Conf., pp. 1-5, Houston, TX, December 5-9. The goal of this project is to develop a formal theory of wireless networks providing a scientific basis to understand randomness and optimality. Randomness, in the form of fading, is a defining characteristic of wireless networks. Optimality is a suitable design
Yi, Faliu; Jeoung, Yousun; Moon, Inkyu
2017-05-20
In recent years, many studies have focused on authentication of two-dimensional (2D) images using double random phase encryption techniques. However, there has been little research on three-dimensional (3D) imaging systems, such as integral imaging, for 3D image authentication. We propose a 3D image authentication scheme based on a double random phase integral imaging method. All of the 2D elemental images captured through integral imaging are encrypted with a double random phase encoding algorithm and only partial phase information is reserved. All the amplitude and other miscellaneous phase information in the encrypted elemental images is discarded. Nevertheless, we demonstrate that 3D images from integral imaging can be authenticated at different depths using a nonlinear correlation method. The proposed 3D image authentication algorithm can provide enhanced information security because the decrypted 2D elemental images from the sparse phase cannot be easily observed by the naked eye. Additionally, using sparse phase images without any amplitude information can greatly reduce data storage costs and aid in image compression and data transmission.
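The double random phase encoding step underlying the scheme can be sketched on a single 2-D image (classic DRPE only; the integral-imaging capture, sparse-phase reservation, and nonlinear correlation are omitted, and the data are synthetic):

```python
import numpy as np

rng = np.random.default_rng(5)
img = rng.random((32, 32))                       # stand-in for one elemental image

# Two statistically independent random phase masks.
m1 = np.exp(2j * np.pi * rng.random(img.shape))  # input plane
m2 = np.exp(2j * np.pi * rng.random(img.shape))  # Fourier plane

# Encryption: image * mask1 -> FFT -> * mask2 -> inverse FFT.
enc = np.fft.ifft2(np.fft.fft2(img * m1) * m2)

# Decryption with the correct keys recovers the image ...
dec = np.fft.ifft2(np.fft.fft2(enc) * np.conj(m2)) * np.conj(m1)
print(np.allclose(dec.real, img))                # -> True

# ... while a wrong Fourier-plane key yields noise.
wrong = np.exp(2j * np.pi * rng.random(img.shape))
bad = np.fft.ifft2(np.fft.fft2(enc) * np.conj(wrong)) * np.conj(m1)
print(np.allclose(bad.real, img))                # -> False
```

The proposed scheme keeps only partial phase information of `enc`, which is why the decrypted elemental images are not directly recognizable yet still correlate with the originals.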
Active control of the spatial MRI phase distribution with optimal control theory
NASA Astrophysics Data System (ADS)
Lefebvre, Pauline M.; Van Reeth, Eric; Ratiney, Hélène; Beuf, Olivier; Brusseau, Elisabeth; Lambert, Simon A.; Glaser, Steffen J.; Sugny, Dominique; Grenier, Denis; Tse Ve Koon, Kevin
2017-08-01
This paper investigates the use of Optimal Control (OC) theory to design Radio-Frequency (RF) pulses that actively control the spatial distribution of the MRI magnetization phase. The RF pulses are generated through the application of the Pontryagin Maximum Principle and optimized so that the resulting transverse magnetization reproduces various non-trivial and spatial phase patterns. Two different phase patterns are defined and the resulting optimal pulses are tested both numerically with the ODIN MRI simulator and experimentally with an agar gel phantom on a 4.7 T small-animal MR scanner. Phase images obtained in simulations and experiments are both consistent with the defined phase patterns. A practical application of phase control with OC-designed pulses is also presented, with the generation of RF pulses adapted for a Magnetic Resonance Elastography experiment. This study demonstrates the possibility to use OC-designed RF pulses to encode information in the magnetization phase and could have applications in MRI sequences using phase images.
Deininger, Michael W; Kopecky, Kenneth J; Radich, Jerald P; Kamel-Reid, Suzanne; Stock, Wendy; Paietta, Elisabeth; Emanuel, Peter D; Tallman, Martin; Wadleigh, Martha; Larson, Richard A; Lipton, Jeffrey H; Slovak, Marilyn L; Appelbaum, Frederick R; Druker, Brian J
2014-01-01
The standard dose of imatinib for newly diagnosed patients with chronic phase chronic myeloid leukaemia (CP-CML) is 400 mg daily (IM400), but the optimal dose is unknown. This randomized phase II study compared the rates of molecular, haematological and cytogenetic response to IM400 vs. imatinib 400 mg twice daily (IM800) in 153 adult patients with CP-CML. Dose adjustments for toxicity were flexible to maximize retention on study. Molecular response (MR) at 12 months was deeper in the IM800 arm (4-log reduction of BCR-ABL1 mRNA: 25% vs. 10% of patients, P = 0·038; 3-log reduction: 53% vs. 35%, P = 0·049). During the first 12 months BCR-ABL1 levels in the IM800 arm were an average 2·9-fold lower than in the IM400 arm (P = 0·010). Complete haematological response was similar, but complete cytogenetic response was higher with IM800 (85% vs. 67%, P = 0·040). Grade 3-4 toxicities were more common for IM800 (58% vs. 31%, P = 0·0007), and were most commonly haematological. Few patients have relapsed, progressed or died, but both progression-free (P = 0·048) and relapse-free (P = 0·031) survival were superior for IM800. In newly diagnosed CP-CML patients, IM800 induced deeper MRs than IM400, with a trend for improved progression-free and overall survival, but was associated with more severe toxicity. © 2013 John Wiley & Sons Ltd.
An adaptive reentry guidance method considering the influence of blackout zone
NASA Astrophysics Data System (ADS)
Wu, Yu; Yao, Jianyao; Qu, Xiangju
2018-01-01
Reentry guidance has been researched as a popular topic because it is critical for a successful flight. Given that existing guidance methods do not take into account the accumulated navigation error of the Inertial Navigation System (INS) in the blackout zone, in this paper an adaptive reentry guidance method is proposed to obtain the optimal reentry trajectory quickly with the target of minimum aerodynamic heating rate. The terminal error in position and attitude can also be reduced with the proposed method. In this method, the whole reentry guidance task is divided into two phases, i.e., the trajectory updating phase and the trajectory planning phase. In the first phase, the idea of model predictive control (MPC) is used, and the receding optimization procedure ensures the optimal trajectory over the next few seconds. In the trajectory planning phase, after the vehicle has flown out of the blackout zone, the optimal reentry trajectory is obtained by online planning to adapt to the navigation information. An effective swarm intelligence algorithm, the pigeon inspired optimization (PIO) algorithm, is applied to obtain the optimal reentry trajectory in both phases. Compared to the trajectory updating method, the proposed method can reduce the terminal error by about 30% considering both position and attitude; in particular, the terminal error in height is almost eliminated. Moreover, the PIO algorithm performs better than the particle swarm optimization (PSO) algorithm in both the trajectory updating and trajectory planning phases.
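For readers unfamiliar with PIO, a simplified sketch of its two standard operators on a toy objective follows. The parameter values, the sphere objective, and the fitness-weighted landmark centre follow the common textbook formulation of PIO, not necessarily the variant used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def fitness(x):                       # toy objective to minimize
    return np.sum(x ** 2, axis=-1)

n, dim, R = 30, 2, 0.3
X = rng.uniform(-5, 5, (n, dim))      # pigeon positions
V = np.zeros((n, dim))                # velocities

# Map-and-compass operator: pigeons steer toward the current global best,
# with an exponentially decaying velocity memory.
for t in range(1, 51):
    g = X[np.argmin(fitness(X))]
    V = V * np.exp(-R * t) + rng.random((n, 1)) * (g - X)
    X = X + V

# Landmark operator: halve the flock each step and home in on the
# fitness-weighted centre of the remaining pigeons.
for _ in range(10):
    order = np.argsort(fitness(X))
    X = X[order][: max(2, len(X) // 2)]
    w = 1.0 / (fitness(X) + 1e-12)
    centre = (X * w[:, None]).sum(0) / w.sum()
    X = X + rng.random((len(X), 1)) * (centre - X)

best = X[np.argmin(fitness(X))]
print(fitness(best))                  # small value near the optimum at 0
```

In the guidance problem above, the decision variables would encode the trajectory parameters and the fitness would be the aerodynamic heating objective rather than this sphere function.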
Micro-Randomized Trials: An Experimental Design for Developing Just-in-Time Adaptive Interventions
Klasnja, Predrag; Hekler, Eric B.; Shiffman, Saul; Boruvka, Audrey; Almirall, Daniel; Tewari, Ambuj; Murphy, Susan A.
2015-01-01
Objective This paper presents an experimental design, the micro-randomized trial, developed to support optimization of just-in-time adaptive interventions (JITAIs). JITAIs are mHealth technologies that aim to deliver the right intervention components at the right times and locations to optimally support individuals’ health behaviors. Micro-randomized trials offer a way to optimize such interventions by enabling modeling of causal effects and time-varying effect moderation for individual intervention components within a JITAI. Methods The paper describes the micro-randomized trial design, enumerates research questions that this experimental design can help answer, and provides an overview of the data analyses that can be used to assess the causal effects of studied intervention components and investigate time-varying moderation of those effects. Results Micro-randomized trials enable causal modeling of proximal effects of the randomized intervention components and assessment of time-varying moderation of those effects. Conclusions Micro-randomized trials can help researchers understand whether their interventions are having intended effects, when and for whom they are effective, and what factors moderate the interventions’ effects, enabling creation of more effective JITAIs. PMID:26651463
A Two-Phase Coverage-Enhancing Algorithm for Hybrid Wireless Sensor Networks.
Zhang, Qingguo; Fok, Mable P
2017-01-09
Providing field coverage is a key task in many sensor network applications. In certain scenarios, the sensor field may have coverage holes due to random initial deployment of sensors; thus, the desired level of coverage cannot be achieved. A hybrid wireless sensor network is a cost-effective solution to this problem, which is achieved by repositioning a portion of the mobile sensors in the network to meet the network coverage requirement. This paper investigates how to redeploy mobile sensor nodes to improve network coverage in hybrid wireless sensor networks. We propose a two-phase coverage-enhancing algorithm for hybrid wireless sensor networks. In phase one, we use a differential evolution algorithm to compute the candidate's target positions in the mobile sensor nodes that could potentially improve coverage. In the second phase, we use an optimization scheme on the candidate's target positions calculated from phase one to reduce the accumulated potential moving distance of mobile sensors, such that the exact mobile sensor nodes that need to be moved as well as their final target positions can be determined. Experimental results show that the proposed algorithm provided significant improvement in terms of area coverage rate, average moving distance, area coverage-distance rate and the number of moved mobile sensors, when compared with other approaches.
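Phase one's differential evolution step can be sketched on a toy coverage problem. The field size, sensing radius, grid resolution, and DE parameters (rand/1/bin with F=0.7, CR=0.9) are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy field: cover a unit square, sampled on a grid, with 3 mobile
# sensors of sensing radius 0.3.
grid = np.stack(np.meshgrid(np.linspace(0, 1, 15),
                            np.linspace(0, 1, 15)), -1).reshape(-1, 2)
R, n_sensors = 0.3, 3

def coverage(flat):
    pos = flat.reshape(n_sensors, 2)
    d = np.linalg.norm(grid[:, None, :] - pos[None, :, :], axis=-1)
    return np.mean(d.min(axis=1) <= R)        # fraction of grid points covered

# Differential evolution (rand/1/bin) over candidate sensor positions.
NP, D, F, CR = 20, n_sensors * 2, 0.7, 0.9
pop = rng.random((NP, D))
fit = np.array([coverage(x) for x in pop])
for _ in range(100):
    for i in range(NP):
        a, b, c = pop[rng.choice(NP, 3, replace=False)]
        trial = np.where(rng.random(D) < CR,
                         np.clip(a + F * (b - c), 0, 1), pop[i])
        f = coverage(trial)
        if f >= fit[i]:                        # greedy selection
            pop[i], fit[i] = trial, f
print(fit.max())        # best covered fraction found in phase one
```

The paper's second phase would then take such candidate positions and minimize the accumulated moving distance; only the coverage-maximizing search is shown here.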
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas Paul, V.; Saroja, S.; Albert, S.K.
This paper presents a detailed electron microscopy study on the microstructure of various regions of weldments fabricated by three welding methods, namely tungsten inert gas welding, electron beam welding and laser beam welding, in an indigenously developed 9Cr reduced activation ferritic/martensitic (RAFM) steel. Electron back scatter diffraction studies showed a random micro-texture in all three welds. Microstructural changes during thermal exposures were studied and corroborated with hardness, and optimized conditions for the post weld heat treatment have been identified for this steel. The Hollomon-Jaffe parameter has been used to estimate the extent of tempering. The activation energy for the tempering process has been evaluated and found to correspond to interstitial diffusion of carbon in the ferrite matrix. The type and microchemistry of secondary phases in different regions of the weldment have been identified by analytical transmission electron microscopy. - Highlights: • Comparison of microstructural parameters in TIG, electron beam and laser welds of RAFM steel • EBSD studies to illustrate the absence of preferred orientation and identification of prior austenite grain size using phase identification map • Optimization of PWHT conditions for indigenous RAFM steel • Study of kinetics of tempering and estimation of apparent activation energy of the process.
Ma, Li; Fan, Suohai
2017-03-14
The random forests algorithm is a type of classifier with prominent universality, a wide application range, and robustness against overfitting. But there are still some drawbacks to random forests. Therefore, to improve the performance of random forests, this paper seeks to improve imbalanced data processing, feature selection and parameter optimization. We propose the CURE-SMOTE algorithm for the imbalanced data classification problem. Experiments on imbalanced UCI data reveal that the combination with Clustering Using Representatives (CURE) effectively enhances the original synthetic minority oversampling technique (SMOTE) algorithm compared with the classification results on the original data using random sampling, Borderline-SMOTE1, safe-level SMOTE, C-SMOTE, and k-means-SMOTE. Additionally, hybrid RF (random forests) algorithms have been proposed for feature selection and parameter optimization, which use the minimum out-of-bag (OOB) data error as the objective function. Simulation results on binary and higher-dimensional data indicate that the proposed hybrid genetic-random forests, hybrid particle swarm-random forests and hybrid fish swarm-random forests algorithms can achieve the minimum OOB error and show the best generalization ability. The training set produced by the proposed CURE-SMOTE algorithm is closer to the original data distribution because it contains minimal noise. Thus, better classification results are produced by this feasible and effective algorithm. Moreover, the hybrid algorithms' F-value, G-mean, AUC and OOB scores demonstrate that they surpass the performance of the original RF algorithm. Hence, these hybrid algorithms provide a new way to perform feature selection and parameter optimization.
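For reference, the plain SMOTE interpolation that CURE-SMOTE refines can be sketched as follows (this is the baseline technique, not the clustering-based CURE-SMOTE variant; the data and parameters are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(6)

def smote(X_min, n_new, k=5):
    """Plain SMOTE: each synthetic sample is interpolated between a
    minority point and one of its k nearest minority neighbours."""
    d = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=-1)
    nn = np.argsort(d, axis=1)[:, 1:k + 1]        # skip self (column 0)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))              # random minority sample
        j = nn[i, rng.integers(nn.shape[1])]      # random neighbour of it
        gap = rng.random()                        # interpolation fraction
        out.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(out)

X_min = rng.normal(size=(20, 2))                  # minority class (toy data)
synthetic = smote(X_min, n_new=30)
print(synthetic.shape)                            # -> (30, 2)
```

CURE-SMOTE's contribution is to cluster the minority class first and interpolate from cluster representatives, which keeps synthetic points away from noise and outliers; plain SMOTE, as above, interpolates from every minority point indiscriminately.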
NASA Astrophysics Data System (ADS)
Gao, Qian
For both conventional radio frequency and the comparatively recent optical wireless communication systems, extensive effort from academia has been made to improve network spectrum efficiency and/or reduce the error rate. To achieve these goals, many fundamental challenges, such as power-efficient constellation design, nonlinear distortion mitigation, channel training design, and network scheduling, need to be properly addressed. In this dissertation, novel schemes are proposed to deal with specific problems falling into the category of these challenges. Rigorous proofs and analyses are provided for each of our works, with fair comparisons against the corresponding peer works to clearly demonstrate the advantages. The first part of this dissertation considers a multi-carrier optical wireless system employing intensity modulation (IM) and direct detection (DD). A block-wise constellation design is presented, which treats the DC bias that is conventionally used solely for biasing purposes as an information basis. Our scheme, which we term MSM-JDCM, takes advantage of the compactness of sphere packing in a higher-dimensional space, and in turn power-efficient constellations are obtained by solving an advanced convex optimization problem. Besides the significant power gains, the MSM-JDCM has many other merits: it can mitigate nonlinear distortion by including a peak-to-average-power ratio (PAPR) constraint, minimize inter-symbol interference (ISI) caused by frequency-selective fading with a novel precoder designed and embedded, and further reduce the bit error rate (BER) when combined with an optimized labeling scheme. The second part addresses several optimization problems in a multi-color visible light communication system, including power-efficient constellation design, joint pre-equalizer and constellation design, and modeling of different structured channels with crosstalk.
Our novel constellation design scheme, termed CSK-Advanced, is compared with the conventional decoupled system at the same spectrum efficiency to demonstrate its power efficiency. Crucial lighting requirements are included as optimization constraints. To control nonlinear distortion, the optical peak-to-average-power ratio (PAPR) of the LEDs can be individually constrained. With an SVD-based pre-equalizer designed and employed, our scheme can achieve a lower BER than counterparts applying zero-forcing (ZF) or linear minimum-mean-squared-error (LMMSE) based post-equalizers. Besides, a binary switching algorithm (BSA) is applied to further improve BER performance. The third part looks into a two-phase channel estimation problem in a relayed wireless network. The channel estimates in each phase are obtained by the linear minimum mean squared error (LMMSE) method. An inaccurate estimate of the relay-to-destination (RtD) channel in phase 1 degrades the estimate of the source-to-relay (StR) channel in phase 2. We first derive a closed-form expression for the averaged Bayesian mean-square estimation error (ABMSE) of both phase estimates in terms of the lengths of the source and relay training slots, based on which an iterative searching algorithm is then proposed that optimally allocates training slots to the two phases such that the estimation errors are balanced. Analysis shows how the ABMSE varies with the lengths of the relay and source training slots, the relay amplification gain, and the channel prior information, respectively. The last part deals with a transmission scheduling problem in an uplink multiple-input-multiple-output (MIMO) wireless network. Code division multiple access (CDMA) is assumed as the multiple access scheme, and pseudo-random codes are employed for different users. We consider a heavy traffic scenario, in which each user always has packets to transmit in the scheduled time slots.
If the relay is scheduled for transmission together with users, then it operates in a full-duplex mode, where the packets previously collected from users are transmitted to the destination while new packets are being collected from users. A novel expression of throughput is first derived and then used to develop a scheduling algorithm to maximize the throughput. Our full-duplex scheduling is compared with a half-duplex scheduling, random access, and time division multiple access (TDMA), and simulation results illustrate its superiority. Throughput gains due to employment of both MIMO and CDMA are observed.
Lalonde, Michel; Wells, R Glenn; Birnie, David; Ruddy, Terrence D; Wassenaar, Richard
2014-07-01
Phase analysis of single photon emission computed tomography (SPECT) radionuclide angiography (RNA) has been investigated for its potential to predict the outcome of cardiac resynchronization therapy (CRT). However, phase analysis may be limited in its potential at predicting CRT outcome, as valuable information may be lost by assuming that time-activity curves (TAC) follow a simple sinusoidal shape. A new method, cluster analysis, is proposed which directly evaluates the TACs and may lead to a better understanding of dyssynchrony patterns and CRT outcome. Cluster analysis algorithms were developed and optimized to maximize their ability to predict CRT response. A total of 49 patients (N = 27 ischemic etiology) received a SPECT RNA scan as well as positron emission tomography (PET) perfusion and viability scans prior to undergoing CRT. A semiautomated algorithm sampled the left ventricle wall to produce 568 TACs from SPECT RNA data. The TACs were then subjected to two different cluster analysis techniques, K-means and normal average, where several input metrics were also varied to determine the optimal settings for the prediction of CRT outcome. Each TAC was assigned to a cluster group based on the comparison criteria, and global and segmental cluster sizes and scores were used as measures of dyssynchrony and used to predict response to CRT. A repeated random twofold cross-validation technique was used to train and validate the cluster algorithm. Receiver operating characteristic (ROC) analysis was used to calculate the area under the curve (AUC) and compare results to those obtained for SPECT RNA phase analysis and PET scar size analysis methods. Using the normal average cluster analysis approach, the septal wall produced statistically significant results for predicting CRT results in the ischemic population (ROC AUC = 0.73; p < 0.05 vs. equal-chance ROC AUC = 0.50), with an optimal operating point of 71% sensitivity and 60% specificity.
Cluster analysis results were similar to SPECT RNA phase analysis (ROC AUC = 0.78, p = 0.73 vs. cluster AUC; sensitivity/specificity = 59%/89%) and PET scar size analysis (ROC AUC = 0.73, p = 1.0 vs. cluster AUC; sensitivity/specificity = 76%/67%). A SPECT RNA cluster analysis algorithm was developed for the prediction of CRT outcome. Cluster analysis produced results equivalent to those obtained from Fourier and scar analysis.
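The K-means grouping of time-activity curves can be illustrated with a minimal pure-Python sketch; the comparison criterion (squared Euclidean distance), the naive initialization, and the sample curves are assumptions for illustration, not the authors' implementation:

```python
def kmeans_curves(curves, k=2, iters=10):
    """Cluster equal-length time-activity curves (lists of floats) into k
    groups by iterating assignment and centroid-update steps."""
    centroids = [list(c) for c in curves[:k]]  # naive init: first k curves
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for c in curves:
            d = [sum((a - b) ** 2 for a, b in zip(c, m)) for m in centroids]
            groups[d.index(min(d))].append(c)
        for j, g in enumerate(groups):
            if g:  # centroid = pointwise mean curve of the group
                centroids[j] = [sum(vals) / len(g) for vals in zip(*g)]
    labels = [min(range(k),
                  key=lambda j: sum((a - b) ** 2 for a, b in zip(c, centroids[j])))
              for c in curves]
    return labels, centroids

# Two hypothetical TAC shapes: an early-peaking pair and a late-peaking pair
curves = [[0, 1, 0], [0.1, 0.9, 0.0], [1, 0, 1], [0.9, 0.1, 1.0]]
labels, _ = kmeans_curves(curves)
```

Curves with similar contraction timing end up in the same cluster, and the sizes of the resulting clusters play the role of the dyssynchrony measures described above.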
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lalonde, Michel, E-mail: mlalonde15@rogers.com; Wassenaar, Richard; Wells, R. Glenn
2014-07-15
Purpose: Phase analysis of single photon emission computed tomography (SPECT) radionuclide angiography (RNA) has been investigated for its potential to predict the outcome of cardiac resynchronization therapy (CRT). However, phase analysis may be limited in its potential at predicting CRT outcome, as valuable information may be lost by assuming that time-activity curves (TAC) follow a simple sinusoidal shape. A new method, cluster analysis, is proposed which directly evaluates the TACs and may lead to a better understanding of dyssynchrony patterns and CRT outcome. Cluster analysis algorithms were developed and optimized to maximize their ability to predict CRT response. Methods: A total of 49 patients (N = 27 ischemic etiology) received a SPECT RNA scan as well as positron emission tomography (PET) perfusion and viability scans prior to undergoing CRT. A semiautomated algorithm sampled the left ventricle wall to produce 568 TACs from SPECT RNA data. The TACs were then subjected to two different cluster analysis techniques, K-means and normal average, where several input metrics were also varied to determine the optimal settings for the prediction of CRT outcome. Each TAC was assigned to a cluster group based on the comparison criteria, and global and segmental cluster sizes and scores were used as measures of dyssynchrony and used to predict response to CRT. A repeated random twofold cross-validation technique was used to train and validate the cluster algorithm. Receiver operating characteristic (ROC) analysis was used to calculate the area under the curve (AUC) and compare results to those obtained for SPECT RNA phase analysis and PET scar size analysis methods. Results: Using the normal average cluster analysis approach, the septal wall produced statistically significant results for predicting CRT results in the ischemic population (ROC AUC = 0.73; p < 0.05 vs. equal-chance ROC AUC = 0.50), with an optimal operating point of 71% sensitivity and 60% specificity.
Cluster analysis results were similar to SPECT RNA phase analysis (ROC AUC = 0.78, p = 0.73 vs. cluster AUC; sensitivity/specificity = 59%/89%) and PET scar size analysis (ROC AUC = 0.73, p = 1.0 vs. cluster AUC; sensitivity/specificity = 76%/67%). Conclusions: A SPECT RNA cluster analysis algorithm was developed for the prediction of CRT outcome. Cluster analysis produced results equivalent to those obtained from Fourier and scar analysis.
SIMRAND I- SIMULATION OF RESEARCH AND DEVELOPMENT PROJECTS
NASA Technical Reports Server (NTRS)
Miles, R. F.
1994-01-01
The Simulation of Research and Development Projects program (SIMRAND) aids in the optimal allocation of R&D resources needed to achieve project goals. SIMRAND models the system subsets or project tasks as various network paths to a final goal. Each path is described in terms of task variables such as cost per hour, cost per unit, availability of resources, etc. Uncertainty is incorporated by treating task variables as probabilistic random variables. SIMRAND calculates the measure of preference for each alternative network. The networks yielding the highest utility (or certainty equivalent) are then ranked as the optimal network paths. SIMRAND has been used in several economic potential studies at NASA's Jet Propulsion Laboratory involving solar dish power systems and photovoltaic array construction. However, any project having tasks which can be reduced to equations and related by measures of preference can be modeled. SIMRAND analysis consists of three phases: reduction, simulation, and evaluation. In the reduction phase, analytical techniques from probability theory and simulation techniques are used to reduce the complexity of the alternative networks. In the simulation phase, a Monte Carlo simulation is used to derive statistics on the variables of interest for each alternative network path. In the evaluation phase, the simulation statistics are compared and the networks are ranked in preference by a selected decision rule. The user must supply project subsystems in terms of equations based on variables (for example, parallel and series assembly line tasks in terms of number of items, cost factors, time limits, etc.). The associated cumulative distribution functions and utility functions for each variable must also be provided (allowable upper and lower limits, group decision factors, etc.). SIMRAND is written in Microsoft FORTRAN 77 for batch execution and has been implemented on an IBM PC series computer operating under DOS.
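The simulation and evaluation phases described above can be sketched as follows; the network names, task cost ranges, and the risk-neutral utility rule are hypothetical stand-ins for SIMRAND's FORTRAN implementation:

```python
import random
import statistics

def simulate_network(task_costs, n_runs=2000, seed=1):
    """Simulation phase: Monte Carlo sampling of total path cost, where
    each task cost is a uniform random variable given as a (low, high) range."""
    rng = random.Random(seed)
    return [sum(rng.uniform(lo, hi) for lo, hi in task_costs)
            for _ in range(n_runs)]

def rank_networks(networks, utility=lambda cost: -cost):
    """Evaluation phase: rank alternatives by expected utility. The default
    decision rule is risk-neutral: lower expected cost is better."""
    scored = [(statistics.mean(map(utility, simulate_network(net))), name)
              for name, net in networks.items()]
    return [name for _, name in sorted(scored, reverse=True)]

# Hypothetical alternative network paths with per-task cost ranges
networks = {"solar_dish": [(10, 20), (5, 15)],
            "photovoltaic": [(8, 12), (9, 11)]}
ranking = rank_networks(networks)
```

Swapping in a concave `utility` function would encode risk aversion, which is where the certainty-equivalent ranking mentioned in the abstract comes in.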
Wang, Xiaogang; Chen, Wen; Chen, Xudong
2015-03-09
In this paper, we develop a new optical information authentication system based on compressed double-random-phase-encoded images and quick-response (QR) codes, where the parameters of the optical lightwave are used as keys for optical decryption and the QR code is a key for verification. An input image attached with a QR code is first optically encoded in a simplified double random phase encoding (DRPE) scheme without using an interferometric setup. From the single encoded intensity pattern recorded by a CCD camera, a compressed double-random-phase-encoded image, i.e., the sparse phase distribution used for optical decryption, is generated by using an iterative phase retrieval technique with the QR code. We compare this technique to two other methods proposed in the literature, i.e., Fresnel domain information authentication based on the classical DRPE with holographic technique, and information authentication based on DRPE and a phase retrieval algorithm. Simulation results show that QR codes are effective in improving the security and data sparsity of the optical information encryption and authentication system.
Guillaume, Y C; Peyrin, E
2000-03-06
A chemometric methodology is proposed to study the separation of seven p-hydroxybenzoic esters in reversed-phase liquid chromatography (RPLC). Fifteen experiments were found to be necessary to find a mathematical model which linked a novel chromatographic response function (CRF) with the column temperature, the water fraction in the mobile phase, and its flow rate. The CRF optimum was determined using a new algorithm based on Glover's tabu search (TS). A flow rate of 0.9 ml min(-1) with a water fraction of 0.64 in the ACN-water mixture and a column temperature of 10 degrees C gave the most efficient separation conditions. The usefulness of TS was compared with pure random search (PRS) and simplex search (SS). As demonstrated by calculations, the algorithm avoids entrapment in local minima and continues the search to give a near-optimal final solution. Unlike other methods of global optimisation, this procedure is generally applicable, easy to implement, derivative-free, conceptually simple, and could be used in the future for much more complex optimisation problems.
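A minimal sketch of the tabu search idea (always move to the best non-tabu neighbour, even uphill, so the search can escape local minima) on a one-dimensional grid; the objective function and neighbourhood below are illustrative, not the paper's chromatographic CRF:

```python
def tabu_search(f, candidates, n_iter=50, tabu_len=5):
    """Minimal tabu search over an ordered candidate list: repeatedly move
    to the best neighbour not on the tabu list, accepting uphill moves,
    and remember the best point ever visited."""
    current = candidates[0]
    best = current
    tabu = [current]
    idx = {c: i for i, c in enumerate(candidates)}
    for _ in range(n_iter):
        i = idx[current]
        neighbours = [candidates[j] for j in (i - 1, i + 1)
                      if 0 <= j < len(candidates) and candidates[j] not in tabu]
        if not neighbours:
            break
        current = min(neighbours, key=f)  # may be worse than `current`
        tabu.append(current)
        tabu = tabu[-tabu_len:]           # short-term memory
        if f(current) < f(best):
            best = current
    return best

# Hypothetical objective with a shallow local minimum before the global one
f = lambda x: ((x - 3) * (x - 12)) ** 2 / 100 - x
best = tabu_search(f, list(range(21)))
```

A greedy descent started from 0 stalls in the shallow dip near x = 4; the tabu list forces the search to climb out and continue to the global minimum.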
Experimental design methodologies in the optimization of chiral CE or CEC separations: an overview.
Dejaegher, Bieke; Mangelings, Debby; Vander Heyden, Yvan
2013-01-01
In this chapter, an overview of experimental designs to develop chiral capillary electrophoresis (CE) and capillary electrochromatographic (CEC) methods is presented. Method development is generally divided into technique selection, method optimization, and method validation. In the method optimization part, often two phases can be distinguished, i.e., a screening and an optimization phase. In method validation, the method is evaluated on its fit for purpose. A validation item, also applying experimental designs, is robustness testing. In the screening phase and in robustness testing, screening designs are applied. During the optimization phase, response surface designs are used. The different design types and their application steps are discussed in this chapter and illustrated by examples of chiral CE and CEC methods.
Robust portfolio selection based on asymmetric measures of variability of stock returns
NASA Astrophysics Data System (ADS)
Chen, Wei; Tan, Shaohua
2009-10-01
This paper addresses a new uncertainty set, the interval random uncertainty set, for robust optimization. The form of the interval random uncertainty set makes it suitable for capturing the downside and upside deviations of real-world data. These deviation measures capture distributional asymmetry and lead to better optimization results. We also apply our interval random chance-constrained programming to robust mean-variance portfolio selection under interval random uncertainty sets in the elements of the mean vector and covariance matrix. Numerical experiments with real market data indicate that our approach results in better portfolio performance.
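A toy two-asset illustration of the robust idea (optimize against the worst-case mean within its interval); the interval bounds, variances, and risk-aversion parameter are hypothetical, and the paper's actual chance-constrained formulation is considerably richer:

```python
def robust_weight(lo_mu, var, lam=2.0, steps=100):
    """Two-asset sketch: choose the weight on asset 0 that maximizes the
    worst-case mean return (interval lower bounds) minus lam times the
    portfolio variance, assuming uncorrelated returns."""
    best_w, best_score = 0.0, float("-inf")
    for i in range(steps + 1):
        w = i / steps
        score = (w * lo_mu[0] + (1 - w) * lo_mu[1]
                 - lam * (w ** 2 * var[0] + (1 - w) ** 2 * var[1]))
        if score > best_score:
            best_w, best_score = w, score
    return best_w

# Interval lower bounds on means and per-asset variances (hypothetical)
w = robust_weight(lo_mu=(0.05, 0.08), var=(0.01, 0.04))
```

Using the interval lower bound of each mean is what makes the selection robust to downside deviation: the chosen weight tilts toward the lower-variance asset even though its worst-case return is smaller.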
NASA Astrophysics Data System (ADS)
Chen, J.; Xi, G.; Wang, W.
2008-02-01
Detecting phase transitions in neural networks (deterministic or random) is a challenging subject, because phase transitions play a key role in human brain activity. In this paper, we numerically detect phase transitions in two types of random neural networks (RNN) under proper parameters.
The ZpiM algorithm: a method for interferometric image reconstruction in SAR/SAS.
Dias, José M B; Leitao, José M N
2002-01-01
This paper presents an effective algorithm for absolute phase (not simply modulo-2pi) estimation from incomplete, noisy, and modulo-2pi observations in interferometric aperture radar and sonar (InSAR/InSAS). The adopted framework is also representative of other applications such as optical interferometry, magnetic resonance imaging, and diffraction tomography. The Bayesian viewpoint is adopted; the observation density is 2pi-periodic and accounts for the interferometric pair decorrelation and system noise; the a priori probability of the absolute phase is modeled by a compound Gauss-Markov random field (CGMRF) tailored to piecewise smooth absolute phase images. We propose an iterative scheme for the computation of the maximum a posteriori probability (MAP) absolute phase estimate. Each iteration embodies a discrete optimization step (Z-step), implemented by network programming techniques, and an iterated conditional modes (ICM) step (pi-step). Accordingly, the algorithm is termed ZpiM, where the letter M stands for maximization. An important contribution of the paper is the simultaneous implementation of phase unwrapping (inference of the 2pi multiples) and smoothing (denoising of the observations). This considerably improves the accuracy of the absolute phase estimates compared to methods in which the data is low-pass filtered prior to unwrapping. A set of experimental results, comparing the proposed algorithm with alternative methods, illustrates the effectiveness of our approach.
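The "inference of the 2pi multiples" subproblem can be illustrated with the classical one-dimensional Itoh unwrapping rule, which ZpiM generalizes to noisy two-dimensional data; this sketch omits the Bayesian smoothing entirely:

```python
import math

def unwrap(phases):
    """Itoh's 1-D unwrapping: whenever a neighbouring sample jumps by more
    than pi, add the 2*pi multiple that removes the jump. Valid when the
    true phase changes by less than pi between samples and there is no noise."""
    out = [phases[0]]
    for p in phases[1:]:
        d = p - out[-1]
        d -= 2 * math.pi * round(d / (2 * math.pi))  # rewrap difference to (-pi, pi]
        out.append(out[-1] + d)
    return out

true = [0.9 * i for i in range(10)]                             # absolute phase ramp
wrapped = [math.atan2(math.sin(p), math.cos(p)) for p in true]  # modulo-2pi observations
recovered = unwrap(wrapped)
```

Noise or undersampled fringes break this simple rule, which is why the paper couples the unwrapping (Z-step) with smoothing (pi-step) instead of filtering and unwrapping separately.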
Constrained optimal multi-phase lunar landing trajectory with minimum fuel consumption
NASA Astrophysics Data System (ADS)
Mathavaraj, S.; Pandiyan, R.; Padhi, R.
2017-12-01
A Legendre pseudospectral, multi-phase, constrained fuel-optimal trajectory design approach is presented in this paper. The objective here is to find an optimal approach to successfully guide a lunar lander from the perilune (18 km altitude) of a transfer orbit to a height of 100 m over a specific landing site. After attaining 100 m altitude, there is a mission-critical re-targeting phase, which has a very different objective (but is not critical for fuel optimization) and hence is not considered in this paper. The proposed approach takes into account various mission constraints in different phases from perilune to the landing site. These constraints include phase-1 ('braking with rough navigation') from 18 km altitude to 7 km altitude, where navigation accuracy is poor; phase-2 ('attitude hold'), which holds the lander attitude for 35 s for vision camera processing to obtain the navigation error; and phase-3 ('braking with precise navigation') from the end of phase-2 to 100 m altitude over the landing site, where navigation accuracy is good (due to vision camera navigation inputs). At the end of phase-1, there are constraints on position and attitude. In phase-2, the attitude must be held throughout. At the end of phase-3, the constraints include accuracy in position and velocity as well as attitude orientation. The proposed optimal trajectory technique satisfies the mission constraints in each phase and provides an overall fuel-minimizing guidance command history.
Mixed Integer Programming and Heuristic Scheduling for Space Communication Networks
NASA Technical Reports Server (NTRS)
Lee, Charles H.; Cheung, Kar-Ming
2012-01-01
In this paper, we propose to solve the constrained optimization problem in two phases. The first phase uses heuristic methods such as the ant colony method, particle swarm optimization, and the genetic algorithm to seek a near-optimal solution among a list of feasible initial populations. The final optimal solution is then found by using the solution of the first phase as the initial condition for the sequential quadratic programming (SQP) algorithm. We demonstrate the above problem formulation and optimization schemes with a large-scale network that includes the DSN ground stations and a number of spacecraft of deep space missions.
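The two-phase scheme can be sketched generically; here random multi-start stands in for the heuristic phase and plain gradient descent stands in for SQP, on a hypothetical one-dimensional multi-modal objective:

```python
import random

def two_phase_minimize(f, grad, bounds, n_seeds=50, lr=0.01, n_steps=500, seed=0):
    """Phase 1: random multi-start (a stand-in for GA/PSO/ant colony) picks
    a near-optimal candidate. Phase 2: gradient descent (a stand-in for SQP)
    refines it to the nearby local optimum."""
    rng = random.Random(seed)
    lo, hi = bounds
    seeds = [rng.uniform(lo, hi) for _ in range(n_seeds)]
    x = min(seeds, key=f)            # phase 1: best heuristic candidate
    for _ in range(n_steps):         # phase 2: local refinement
        x -= lr * grad(x)
    return x

# Hypothetical objective: two local minima, the global one near x = -2.03,
# a shallower one near x = 1.97.
f = lambda x: (x ** 2 - 4) ** 2 + x
g = lambda x: 4 * x * (x ** 2 - 4) + 1
x_star = two_phase_minimize(f, g, bounds=(-3, 3))
```

The heuristic phase matters because the local solver alone converges to whichever basin its initial condition lies in; seeding it with the best of many candidates makes landing in the global basin far more likely.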
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen Qing; Department of Modern Physics, University of Science and Technology of China, Hefei 230026; Cheng Jianhua
In this paper we demonstrate that optimal 1→M phase-covariant quantum cloning is available via free dynamical evolution of spin networks. By properly designing the network and the couplings between spins, we show that optimal 1→M phase-covariant cloning can be achieved if the initial state is prepared as a specific symmetric state. Especially, when M is an odd number, the optimal phase-covariant cloning can be achieved without ancillas. Moreover, we demonstrate that the same framework is capable of optimal 1→2 universal cloning.
Hwang, Jin Soon; Lee, Hae Sang; Lee, Kee-Hyoung; Yoo, Han-Wook; Lee, Dae-Yeol; Suh, Byung-Kyu; Ko, Cheol Woo; Chung, Woo Yeong; Jin, Dong-Kyu; Shin, Choong Ho; Han, Heon-Seok; Han, Song; Kim, Ho-Seong
2018-06-20
To determine the optimal dose of LB03002, a sustained-release, once-weekly formulation of recombinant human growth hormone (rhGH), and to compare its efficacy and safety with daily rhGH in children with idiopathic short stature (ISS). This multicenter, randomized, open-label, phase II study included GH-naïve, prepubertal children with ISS, randomized to receive daily rhGH 0.37 mg/kg/week (control, n = 16), LB03002 0.5 mg/kg/week (n = 14), or LB03002 0.7 mg/kg/week (n = 16). The primary endpoint was height velocity (HV) change at week 26. At week 26, the least squares (LS) means for HV change (cm/year) with control, LB03002 0.5 mg/kg/week, and LB03002 0.7 mg/kg/week were 5.08, 3.65, and 4.38, and the LS means for the change in height standard deviation score were 0.65, 0.49, and 0.58, respectively. The lower bound of the 90% confidence interval for the difference between LB03002 0.7 mg/kg/week and the control in the LS mean for HV change (-1.72) satisfied the noninferiority margin (-1.75). Adverse events were generally mild and short-lived. A once-weekly regimen of LB03002 0.7 mg/kg demonstrated noninferiority to the daily regimen of rhGH 0.37 mg/kg/week in terms of HV increments. LB03002 was well tolerated and its safety profile was comparable with that of daily rhGH. © 2018 S. Karger AG, Basel.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tierney, J.C.; Glovan, R.J.; Witt, S.J.
1995-12-31
A four-phase experimental design was utilized to evaluate the abrasive wear and corrosion protection characteristics of VERSAlloy 50 coatings applied to AISI 4130 steel sheet. The coatings were applied with the Pressure Controlled Atomization Process (PCAP), a new thermal spray process being developed for the United States Air Force to replace hard chromium plating. Phase 1 of the design consisted of an evaluation of deposit profiles that were sprayed at five different standoff distances. Profile measurements yielded standard deviations (σ) of the plume at each of the spray distances. Phase 2 consisted of a completely randomized series of eight spray tests in which the track gap, or distance between consecutive spray passes, was varied by amounts of 0.5σ, 1σ, 2σ, and 3σ. The sprayed test coupons were then evaluated for corrosion protection, abrasive wear resistance, microhardness, and porosity. Results from Phase 2 were used to determine the best track gap or overlap for Phase 3 and Phase 4 testing. Phase 3 consisted of a 22-run central composite design. The test coupons were evaluated in the same way as in Phase 2. Statistical analysis of Phase 3 data revealed that the optimal system operating parameters produced coatings that would provide either superior corrosion protection or resistance to abrasive wear. Phase 4 consisted of four spray tests to validate the results obtained in Phase 3. Phase 4 test coupons were again evaluated with the same analysis as in Phases 2 and 3. The validation tests indicated that PCAP system operating parameters could be controlled to produce VERSAlloy 50 coatings with superior corrosion protection or resistance to abrasive wear.
Yokota, T; Ogawa, T; Takahashi, S; Okami, K; Fujii, T; Tanaka, K; Iwae, S; Ota, I; Ueda, T; Monden, N; Matsuura, K; Kojima, H; Ueda, S; Sasaki, K; Fujimoto, Y; Hasegawa, Y; Beppu, T; Nishimori, H; Hirano, S; Naka, Y; Matsushima, Y; Fujii, M; Tahara, M
2017-05-05
Recent preclinical and phase I studies have reported that rebamipide decreased the severity of chemoradiotherapy-induced oral mucositis in patients with oral cancer. This placebo-controlled randomized phase II study assessed the clinical benefit of rebamipide in reducing the incidence of severe chemoradiotherapy-induced oral mucositis in patients with head and neck cancer (HNC). Patients aged 20-75 years with HNC who were scheduled to receive chemoradiotherapy were enrolled. Patients were randomized to receive rebamipide 2% liquid, rebamipide 4% liquid, or placebo. The primary endpoint was the incidence of grade ≥ 3 oral mucositis determined by clinical examination and assessed by central review according to the Common Terminology Criteria for Adverse Events version 3.0. Secondary endpoints were the time to onset of grade ≥ 3 oral mucositis and the incidence of functional impairment (grade ≥ 3) based on the evaluation by the Oral Mucositis Evaluation Committee. From April 2014 to August 2015, 97 patients with HNC were enrolled, of whom 94 received treatment. The incidence of grade ≥ 3 oral mucositis was 29% and 25% in the rebamipide 2% and 4% groups, respectively, compared with 39% in the placebo group. The proportion of patients who did not develop grade ≥ 3 oral mucositis by day 50 of treatment was 57.9% in the placebo group, whereas the proportion was 68.0% in the rebamipide 2% group and 71.3% in the rebamipide 4% group. The incidences of adverse events potentially related to the study drug were 16%, 26%, and 13% in the placebo, rebamipide 2%, and rebamipide 4% groups, respectively. There was no significant difference in treatment compliance among the groups. The present phase II study suggests that mouth washing with rebamipide may be effective and safe for patients with HNC receiving chemoradiotherapy, and that 4% liquid is the optimal dose of rebamipide. Registered at ClinicalTrials.gov under the identifier NCT02085460 (trial registration date: March 11, 2014).
Gill, Dawn P; Blunt, Wendy; Bartol, Cassandra; Pulford, Roseanne W; De Cruz, Ashleigh; Simmavong, P Karen; Gavarkovs, Adam; Newhouse, Ian; Pearson, Erin; Ostenfeldt, Bayley; Law, Barbi; Karvinen, Kristina; Moffit, Pertice; Jones, Gareth; Watson, Cori; Zou, Guangyong; Petrella, Robert J
2017-02-07
Physical inactivity is one of the leading causes of chronic disease in Canadian adults. With less than 50% of Canadian adults reaching the recommended amount of daily physical activity, there is an urgent need for effective programs targeting this risk factor. HealtheSteps™ is a healthy lifestyle prescription program, developed from an extensive research base to address risk factors for chronic disease such as physical inactivity, sedentary behaviour and poor eating habits. HealtheSteps™ participants are provided with in-person lifestyle coaching and access to eHealth technologies delivered in community-based primary care clinics and health care organizations. To determine the effectiveness of Healthesteps™, we will conduct a 6-month pragmatic randomized controlled trial with integrated process and economic evaluations of HealtheSteps™ in 5 clinic settings in Southwestern Ontario. 110 participants will be individually randomized (1:1; stratified by site) to either the intervention (HealtheSteps™ program) or comparator (Wait-list control). There are 3 phases of the HealtheSteps™ program, lasting 6 months each. The active phase consists of bi-monthly in-person coaching with access to a full suite of eHealth technology supports. During the maintenance phase I, the in-person coaching will be removed, but participants will still have access to the full suite of eHealth technology supports. In the final stage, maintenance phase II, access to the full suite of eHealth technology supports is removed and participants only have access to publicly available resources and tools. This trial aims to determine the effectiveness of the program in increasing physical activity levels and improving other health behaviours and indicators, the acceptability of the HealtheSteps™ program, and the direct cost for each person participating in the program as well as the costs associated with delivering the program at the different community sites. 
These results will inform future optimization and scaling up of the program into additional community-based primary care sites. NCT02413385 (Clinicaltrials.gov). Date Registered: April 6, 2015.
Reactive and anticipatory looking in 6-month-old infants during a visual expectation paradigm.
Quan, Jeffry; Bureau, Jean-François; Abdul Malik, Adam B; Wong, Johnny; Rifkin-Graboi, Anne
2017-10-01
This article presents data from 278 six-month-old infants who completed a visual expectation paradigm in which audiovisual stimuli were first presented randomly (random phase), and then in a spatial pattern (pattern phase). Infants' eye gaze behaviour was tracked with a 60 Hz Tobii eye-tracker in order to measure two types of looking behaviour: reactive looking (i.e., latency to shift eye gaze in reaction to the appearance of stimuli) and anticipatory looking (i.e., percentage of time spent looking at the location where the next stimulus is about to appear during the inter-stimulus interval). Data pertaining to missing data and task order effects are presented. Further analyses show that infants' reactive looking was faster in the pattern phase, compared to the random phase, and their anticipatory looking increased from random to pattern phases. Within the pattern phase, infants' reactive looking showed a quadratic trend, with reactive looking time latencies peaking in the middle portion of the phase. Similarly, within the pattern phase, infants' anticipatory looking also showed a quadratic trend, with anticipatory looking peaking during the middle portion of the phase.
Shift-phase code multiplexing technique for holographic memories and optical interconnection
NASA Astrophysics Data System (ADS)
Honma, Satoshi; Muto, Shinzo; Okamoto, Atsushi
2008-03-01
Holographic technologies for optical memories and interconnection devices have been actively studied because of their high storage capacity, many wiring patterns, and high transmission rates. Among multiplexing techniques such as angular, phase-code, and wavelength multiplexing, the speckle multiplexing technique has attracted attention due to its simple optical setup, which requires an adjustable random phase filter in only one direction. To keep the construction simple and to suppress crosstalk between adjacent page data or wiring patterns in efficient holographic memories and interconnection, the optimum randomness of the phase filter must be considered. High randomness expands the illumination area of the reference beam on the holographic medium. On the other hand, low randomness causes crosstalk between adjacent hologram data. We have proposed a holographic multiplexing method, shift-phase code multiplexing, with a two-dimensional orthogonal matrix phase filter. Many orthogonal phase codes can be produced by shifting the phase filter in one direction, allowing individual holograms to be recorded and read with low crosstalk. We present basic experimental results on holographic data multiplexing and consider the phase pattern of the filter needed to sufficiently suppress the crosstalk between adjacent holograms.
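Low-crosstalk readout with shifted phase codes comes down to orthogonality; the sketch below uses a one-dimensional linear-ramp code family (a stand-in for the paper's two-dimensional orthogonal matrix filter) whose shifts are mutually orthogonal:

```python
import cmath

def phase_code(N, k):
    """k-th of N mutually orthogonal phase-only codes: a linear phase ramp
    exp(2*pi*i*k*n/N). Incrementing k (a 'shift') yields the next code."""
    return [cmath.exp(2j * cmath.pi * k * n / N) for n in range(N)]

def correlation(a, b):
    """Normalized magnitude of the inner product between two codes."""
    return abs(sum(x * y.conjugate() for x, y in zip(a, b))) / len(a)

N = 8
codes = [phase_code(N, k) for k in range(N)]
# A code correlates perfectly with itself, while distinct shifted codes have
# zero inner product, which is what keeps crosstalk between multiplexed
# holograms low at readout.
```

Each code is phase-only (unit magnitude everywhere), matching the constraint of a phase filter, and the geometric-series cancellation of the inner product is what the orthogonal matrix filter exploits in two dimensions.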
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, W; Ding, X; Hu, Y
Purpose: To investigate how spot size and spacing affect plan quality, especially plan robustness and the impact of the interplay effect, in robustly optimized intensity-modulated proton therapy (IMPT) plans for lung cancer. Methods: Two robustly optimized IMPT plans were created for 10 lung cancer patients: (1) one for a proton beam with in-air energy-dependent large spot size at isocenter (σ: 5–15 mm) and spacing (1.53σ); (2) the other for a proton beam with small spot size (σ: 2–6 mm) and spacing (5 mm). Both plans were generated on the average CTs with internal-gross-tumor-volume density overridden to irradiate the internal target volume (ITV). The root-mean-square-dose volume histograms (RVHs) measured the sensitivity of the dose to uncertainties, and the areas under the RVH curves were used to evaluate plan robustness. Dose evaluation software was developed to model time-dependent spot delivery to incorporate the interplay effect with randomized starting phases of each field per fraction. Patient anatomy voxels were mapped from phase to phase via deformable image registration to score doses. Dose-volume-histogram indices including ITV coverage, homogeneity, and organs-at-risk (OAR) sparing were compared using Student's t-test. Results: Compared to large spots, small spots resulted in significantly better OAR sparing with comparable ITV coverage and homogeneity in the nominal plan. Plan robustness was comparable for the ITV and most OARs. With the interplay effect considered, significantly better OAR sparing with comparable ITV coverage and homogeneity was observed using smaller spots. Conclusion: Robust optimization with smaller spots significantly improves OAR sparing with comparable plan robustness and a similar impact of the interplay effect compared to larger spots. A small spot size requires the use of a larger number of spots, which gives the optimizer more freedom to render a plan more robust.
The ratio between spot size and spacing was found to be more relevant in determining plan robustness and the impact of the interplay effect than spot size alone. This research was supported by the National Cancer Institute Career Developmental Award K25CA168984, by the Fraternal Order of Eagles Cancer Research Fund Career Development Award, by The Lawrence W. and Marilyn W. Matteson Fund for Cancer Research, by Mayo Arizona State University Seed Grant, and by The Kemper Marley Foundation.
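A minimal sketch of the RVH robustness metric described above, with hypothetical dose numbers (the real evaluation uses perturbed treatment plans, not Gaussian noise):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical doses: 1000 voxels under a nominal plan and 9 perturbed
# uncertainty scenarios (numbers are illustrative only).
n_vox, n_scen = 1000, 9
nominal = rng.uniform(40.0, 70.0, n_vox)
scenarios = nominal[:, None] + rng.normal(0.0, 2.0, (n_vox, n_scen))

# Per-voxel root-mean-square dose deviation across the scenarios.
rms = np.sqrt(np.mean((scenarios - nominal[:, None]) ** 2, axis=1))

# RVH: volume fraction whose RMS deviation exceeds each threshold; the
# area under the curve is the robustness score (smaller = more robust).
thresholds = np.linspace(0.0, rms.max(), 200)
rvh = np.array([(rms >= t).mean() for t in thresholds])
auc = float(((rvh[:-1] + rvh[1:]) / 2 * np.diff(thresholds)).sum())
```

A more robust plan shifts the RVH curve toward zero deviation, shrinking the area under it.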
Fundamental Limits of Delay and Security in Device-to-Device Communication
2013-01-01
... systematic MDS (maximum distance separable) codes and random binning strategies that achieve a Pareto-optimal delay-reconstruction tradeoff. A coding scheme based on erasure compression and Slepian-Wolf binning is presented and shown to provide a Pareto-optimal tradeoff. The erasure MD setup is then used to propose a ...
Sofi, Francesco; Dinu, Monica; Pagliai, Giuditta; Cesari, Francesca; Marcucci, Rossella; Casini, Alessandro
2016-05-04
Nutrition is able to alter the cardiovascular health of the general population. However, the optimal dietary strategy for cardiovascular disease prevention is still far from being defined. Mediterranean and vegetarian diets are those reporting the greatest grade of evidence in the literature, but no experimental studies comparing these two dietary patterns are available. This is an open randomized crossover clinical trial including healthy subjects with a low-to-medium cardiovascular risk profile, characterized by being overweight and by the presence of at least one additional metabolic risk factor (abdominal obesity, high total cholesterol, high LDL cholesterol, high triglycerides, impaired fasting glucose) but free from medications. A total of 100 subjects will be included and randomly assigned to two groups: Mediterranean calorie-restricted diet (n = 50) and vegetarian calorie-restricted diet (n = 50). The intervention phases will last 3 months each, and at the end of intervention phase I the groups will be crossed over. The two diets will be isocaloric and of three different sizes (1400, 1600, or 1800 kcal/day), according to specific energy requirements. Adherence to the dietary intervention will be established through questionnaires and 24-h dietary recall. Anthropometric measurements, body composition, blood samples and stool samples will be obtained from each participant at the beginning and at the end of each intervention phase. The primary outcome measure will be change in weight from baseline. The secondary outcome measures will be variations of anthropometric and bioelectrical impedance variables as well as traditional and innovative cardiovascular biomarkers. Despite all the data supporting the efficacy of Mediterranean and vegetarian diets on the prevention of cardiovascular diseases, no studies have directly compared these two dietary profiles.
The trial will test whether there are statistically significant differences between these dietary profiles in reducing the cardiovascular risk burden for the general population. ClinicalTrials.gov NCT02641834.
Optimal control of raw timber production processes
Ivan Kolenka
1978-01-01
This paper demonstrates the possibility of optimal planning and control of timber harvesting activities with mathematical optimization models. The separate phases of timber harvesting are represented by coordinated models which can be used to select the optimal decision for the execution of any given phase. The models form a system whose components are connected and...
Symmetry breaking in tensor models
NASA Astrophysics Data System (ADS)
Benedetti, Dario; Gurau, Razvan
2015-11-01
In this paper we analyze a quartic tensor model with one interaction for a tensor of arbitrary rank. This model has a critical point where a continuous limit of infinitely refined random geometries is reached. We show that the critical point corresponds to a phase transition in the tensor model associated to a breaking of the unitary symmetry. We analyze the model in the two phases and prove that, in a double scaling limit, the symmetric phase corresponds to a theory of infinitely refined random surfaces, while the broken phase corresponds to a theory of infinitely refined random nodal surfaces. At leading order in the double scaling limit planar surfaces dominate in the symmetric phase, and planar nodal surfaces dominate in the broken phase.
Daily Rifapentine for Treatment of Pulmonary Tuberculosis. A Randomized, Dose-Ranging Trial
Savic, Radojka M.; Goldberg, Stefan; Stout, Jason E.; Schluger, Neil; Muzanyi, Grace; Johnson, John L.; Nahid, Payam; Hecker, Emily J.; Heilig, Charles M.; Bozeman, Lorna; Feng, Pei-Jean I.; Moro, Ruth N.; MacKenzie, William; Dooley, Kelly E.; Nuermberger, Eric L.; Vernon, Andrew; Weiner, Marc
2015-01-01
Rationale: Rifapentine has potent activity in mouse models of tuberculosis chemotherapy but its optimal dose and exposure in humans are unknown. Objectives: We conducted a randomized, partially blinded dose-ranging study to determine tolerability, safety, and antimicrobial activity of daily rifapentine for pulmonary tuberculosis treatment. Methods: Adults with sputum smear-positive pulmonary tuberculosis were assigned rifapentine 10, 15, or 20 mg/kg or rifampin 10 mg/kg daily for 8 weeks (intensive phase), with isoniazid, pyrazinamide, and ethambutol. The primary tolerability end point was treatment discontinuation. The primary efficacy end point was negative sputum cultures at completion of intensive phase. Measurements and Main Results: A total of 334 participants were enrolled. At completion of intensive phase, cultures on solid media were negative in 81.3% of participants in the rifampin group versus 92.5% (P = 0.097), 89.4% (P = 0.29), and 94.7% (P = 0.049) in the rifapentine 10, 15, and 20 mg/kg groups. Liquid cultures were negative in 56.3% (rifampin group) versus 74.6% (P = 0.042), 69.7% (P = 0.16), and 82.5% (P = 0.004), respectively. Compared with the rifampin group, the proportion negative at the end of intensive phase was higher among rifapentine recipients who had high rifapentine areas under the concentration–time curve. Percentages of participants discontinuing assigned treatment for reasons other than microbiologic ineligibility were similar across groups (rifampin, 8.2%; rifapentine 10, 15, or 20 mg/kg, 3.4, 2.5, and 7.4%, respectively). Conclusions: Daily rifapentine was well-tolerated and safe. High rifapentine exposures were associated with high levels of sputum sterilization at completion of intensive phase. Further studies are warranted to determine if regimens that deliver high rifapentine exposures can shorten treatment duration to less than 6 months. Clinical trial registered with www.clinicaltrials.gov (NCT 00694629). PMID:25489785
Daily rifapentine for treatment of pulmonary tuberculosis. A randomized, dose-ranging trial.
Dorman, Susan E; Savic, Radojka M; Goldberg, Stefan; Stout, Jason E; Schluger, Neil; Muzanyi, Grace; Johnson, John L; Nahid, Payam; Hecker, Emily J; Heilig, Charles M; Bozeman, Lorna; Feng, Pei-Jean I; Moro, Ruth N; MacKenzie, William; Dooley, Kelly E; Nuermberger, Eric L; Vernon, Andrew; Weiner, Marc
2015-02-01
Rifapentine has potent activity in mouse models of tuberculosis chemotherapy but its optimal dose and exposure in humans are unknown. We conducted a randomized, partially blinded dose-ranging study to determine tolerability, safety, and antimicrobial activity of daily rifapentine for pulmonary tuberculosis treatment. Adults with sputum smear-positive pulmonary tuberculosis were assigned rifapentine 10, 15, or 20 mg/kg or rifampin 10 mg/kg daily for 8 weeks (intensive phase), with isoniazid, pyrazinamide, and ethambutol. The primary tolerability end point was treatment discontinuation. The primary efficacy end point was negative sputum cultures at completion of intensive phase. A total of 334 participants were enrolled. At completion of intensive phase, cultures on solid media were negative in 81.3% of participants in the rifampin group versus 92.5% (P = 0.097), 89.4% (P = 0.29), and 94.7% (P = 0.049) in the rifapentine 10, 15, and 20 mg/kg groups. Liquid cultures were negative in 56.3% (rifampin group) versus 74.6% (P = 0.042), 69.7% (P = 0.16), and 82.5% (P = 0.004), respectively. Compared with the rifampin group, the proportion negative at the end of intensive phase was higher among rifapentine recipients who had high rifapentine areas under the concentration-time curve. Percentages of participants discontinuing assigned treatment for reasons other than microbiologic ineligibility were similar across groups (rifampin, 8.2%; rifapentine 10, 15, or 20 mg/kg, 3.4, 2.5, and 7.4%, respectively). Daily rifapentine was well-tolerated and safe. High rifapentine exposures were associated with high levels of sputum sterilization at completion of intensive phase. Further studies are warranted to determine if regimens that deliver high rifapentine exposures can shorten treatment duration to less than 6 months. Clinical trial registered with www.clinicaltrials.gov (NCT 00694629).
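The culture-conversion comparisons above can be illustrated with a pooled two-proportion z-test; the per-arm denominators below are hypothetical (roughly 334/4 per arm), so the resulting p-value only approximates the paper's analysis.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided z-test for a difference of proportions (pooled SE)."""
    x1, x2 = p1 * n1, p2 * n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_two_sided = math.erfc(abs(z) / math.sqrt(2))  # standard-normal tail
    return z, p_two_sided

# Solid-media conversion: rifapentine 20 mg/kg (94.7%) vs rifampin (81.3%).
# n = 83 per arm is an assumption, not a figure from the paper.
z, p = two_proportion_z(0.947, 83, 0.813, 83)
```

With these assumed denominators the difference is significant at the 5% level, consistent in direction with the reported P = 0.049.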
Evaluation of type II thyroplasty on phonatory physiology in an excised canine larynx model
Devine, Erin E.; Hoffman, Matthew R.; McCulloch, Timothy M.; Jiang, Jack J.
2016-01-01
Objective Type II thyroplasty is an alternative treatment for spasmodic dysphonia, addressing hyperadduction by incising and lateralizing the thyroid cartilage. We quantified the effect of lateralization width on phonatory physiology using excised canine larynges. Methods Normal closure, hyperadduction, and type II thyroplasty (lateralized up to 5mm at 1mm increments with hyperadducted arytenoids) were simulated in excised larynges (N=7). Aerodynamic, acoustic, and videokymographic data were recorded at three subglottal pressures relative to phonation threshold pressure (PTP). One-way repeated measures ANOVA assessed effect of condition on aerodynamic parameters. Random intercepts linear mixed effects models assessed effects of condition and subglottal pressure on acoustic and videokymographic parameters. Results PTP differed across conditions (p<0.001). Condition affected percent shimmer (p<0.005) but not percent jitter. Both pressure (p<0.03) and condition (p<0.001) affected fundamental frequency. Pressure affected vibratory amplitude (p<0.05) and intra-fold phase difference (p<0.05). Condition affected phase difference between the vocal folds (p<0.001). Conclusions Hyperadduction increased PTP and worsened perturbation compared to normal, with near normal physiology restored with 1mm lateralization. Further lateralization deteriorated voice quality and increased PTP. Acoustic and videokymographic results indicate that normal physiologic relationships between subglottal pressure and vibration are preserved at optimal lateralization width, but then degrade with further lateralization. The 1mm optimal width observed here is due to the small canine larynx size. Future human trials would likely demonstrate a greater optimal width, with patient-specific value potentially determined based on larynx size and symptom severity. PMID:27223665
Aircraft adaptive learning control
NASA Technical Reports Server (NTRS)
Lee, P. S. T.; Vanlandingham, H. F.
1979-01-01
The optimal control theory of stochastic linear systems is discussed in terms of the advantages of distributed-control systems and the control of randomly-sampled systems. An optimal solution to longitudinal control is derived and applied to the F-8 DFBW aircraft. A randomly-sampled linear process model with additive process and measurement noise is developed.
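The optimal control machinery involved can be sketched with a standard discrete-time LQR solved by backward Riccati iteration, applied to a generic unstable 2-state surrogate (hypothetical numbers; the actual F-8 longitudinal model is not given in the abstract).

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain by backward Riccati iteration."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Generic unstable 2-state surrogate with one control input.
A = np.array([[1.0, 0.10],
              [0.0, 1.05]])
B = np.array([[0.0],
              [0.1]])
K = dlqr(A, B, Q=np.eye(2), R=np.array([[1.0]]))

# The optimal feedback u = -Kx must stabilize the closed loop.
spectral_radius = float(np.max(np.abs(np.linalg.eigvals(A - B @ K))))
```

The iteration converges because the pair (A, B) is controllable, and the resulting closed-loop spectral radius drops below one.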
A constraint optimization based virtual network mapping method
NASA Astrophysics Data System (ADS)
Li, Xiaoling; Guo, Changguo; Wang, Huaimin; Li, Zhendong; Yang, Zhiwen
2013-03-01
The virtual network mapping problem, which maps different virtual networks onto a substrate network, is extremely challenging. This paper proposes a constraint-optimization-based mapping method for solving the virtual network mapping problem. The method divides the problem into two phases, a node mapping phase and a link mapping phase, both of which are NP-hard. A node mapping algorithm and a link mapping algorithm are proposed for solving the two phases, respectively. The node mapping algorithm follows a greedy approach and mainly considers two factors: the available resources supplied by the nodes and the distance between nodes. The link mapping algorithm builds on the result of the node mapping phase and adopts a distributed constraint optimization method, which guarantees an optimal mapping with minimum network cost. Finally, simulation experiments are used to validate the method, and the results show that it performs very well.
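A toy two-phase mapping in the spirit described above: greedy node mapping by available resources, then shortest-path link mapping (plain Dijkstra stands in for the paper's distributed constraint optimization). All capacities and topologies below are invented.

```python
import heapq

# Hypothetical substrate network: CPU capacities and link costs.
sub_cpu = {"a": 10, "b": 8, "c": 6, "d": 4}
sub_edges = {("a", "b"): 1, ("b", "c"): 1, ("a", "c"): 3, ("c", "d"): 1}
adj = {}
for (u, v), w in sub_edges.items():
    adj.setdefault(u, []).append((v, w))
    adj.setdefault(v, []).append((u, w))

# Virtual network request: CPU demands and one virtual link.
vn_cpu = {"x": 5, "y": 3}
vn_links = [("x", "y")]

# Phase 1 -- greedy node mapping: most demanding virtual node first,
# placed on the feasible substrate node with the most free CPU.
mapping, free = {}, dict(sub_cpu)
for vnode in sorted(vn_cpu, key=vn_cpu.get, reverse=True):
    candidates = [n for n in free
                  if free[n] >= vn_cpu[vnode] and n not in mapping.values()]
    host = max(candidates, key=free.get)
    mapping[vnode] = host
    free[host] -= vn_cpu[vnode]

# Phase 2 -- link mapping by cheapest substrate path (Dijkstra).
def shortest_path(src, dst):
    dist, prev, pq = {src: 0}, {}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1], dist[dst]

paths = {(s, t): shortest_path(mapping[s], mapping[t]) for s, t in vn_links}
```

The demanding node lands on the best-provisioned substrate node, and the virtual link follows the cheapest substrate path between the two hosts.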
High power transcranial beam steering for ultrasonic brain therapy
Pernot, Mathieu; Aubry, Jean-François; Tanter, Mickaël; Thomas, Jean-Louis; Fink, Mathias
2003-01-01
A sparse phased array is specially designed for non-invasive ultrasound transskull brain therapy. The array is made of 200 single elements corresponding to a new generation of high power transducers developed in collaboration with Imasonic (Besançon, France). Each element has a surface of 0.5 cm² and works at 0.9 MHz central frequency with a maximum intensity of 20 W cm⁻² on the transducer surface. In order to optimize the steering capabilities of the array, several transducer distributions on a spherical surface are simulated: hexagonal, annular, and quasi-random distributions. Using a quasi-random distribution significantly reduces the grating lobes. Furthermore, the simulations show the capability of the quasi-random array to electronically move the focal spot in the vicinity of the geometrical focus (up to ±15 mm). Based on the simulation study, the array is constructed and tested. The skull aberrations are corrected by using a time reversal mirror with amplitude correction achieved thanks to an implantable hydrophone, and a sharp focus is obtained through a human skull. Several lesions are induced in fresh liver and brain samples through human skulls, demonstrating the accuracy and the steering capabilities of the system. PMID:12974575
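The grating-lobe suppression gained by randomizing element positions can be checked with a 1-D array-factor computation; the 2λ spacing and aperture below are hypothetical, not the actual array geometry.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200                       # number of elements, as in the paper's array
spacing = 2.0                 # periodic spacing in wavelengths (hypothetical)

periodic = spacing * np.arange(N)
quasi_random = np.sort(rng.uniform(0, spacing * N, N))

u = np.linspace(-1.0, 1.0, 4001)          # u = sin(theta), visible region

def array_factor(x):
    # |sum_n exp(i 2*pi x_n u)| normalised by N (wavelength = 1)
    return np.abs(np.exp(2j * np.pi * np.outer(u, x)).sum(axis=1)) / N

af_per = array_factor(periodic)
af_rnd = array_factor(quasi_random)

side = np.abs(u) > 0.02                   # exclude the main lobe
peak_per = af_per[side].max()             # grating lobe at u = 1/spacing
peak_rnd = af_rnd[side].max()
```

The periodic grid produces a full-height grating lobe at u = 1/spacing, whereas the randomized layout spreads that energy into a low sidelobe floor.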
Binaural comodulation masking release: Effects of masker interaural correlation
Hall, Joseph W.; Buss, Emily; Grose, John H.
2007-01-01
Binaural detection was examined for a signal presented in a narrow band of noise centered on the on-signal masking band (OSB) or in the presence of flanking noise bands that were random or comodulated with respect to the OSB. The noise had an interaural correlation of 1.0 (No), 0.99 or 0.95. In No noise, random flanking bands worsened Sπ detection and comodulated bands improved Sπ detection for some listeners but had no effect for other listeners. For the 0.99 and 0.95 interaural correlation conditions, random flanking bands were less detrimental to Sπ detection and comodulated flanking bands improved Sπ detection for all listeners. Analyses based on signal detection theory indicated that the improvement in Sπ thresholds obtained with comodulated bands was not compatible with an optimal combination of monaural and binaural cues or with across-frequency analyses of dynamic interaural phase differences. Two accounts consistent with the improvement in Sπ thresholds in comodulated noise were (1) envelope information carried by the flanking bands improves the weighting of binaural cues associated with the signal; and (2) the auditory system is sensitive to across-frequency differences in ongoing interaural correlation. PMID:17225415
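Partially interaurally correlated noise of the kind used here is typically generated with the common-plus-independent mixing construction (broadband Gaussian noise in this sketch, rather than the narrowband stimuli of the study):

```python
import numpy as np

rng = np.random.default_rng(42)
rho = 0.95               # target interaural correlation
n = 200_000

# Mix a common noise with an independent one so the two ear signals
# have correlation rho.
common = rng.standard_normal(n)
indep = rng.standard_normal(n)
left = common
right = rho * common + np.sqrt(1.0 - rho ** 2) * indep

measured = float(np.corrcoef(left, right)[0, 1])
```

With long noise tokens the measured correlation matches the target closely; the 0.99 condition only requires changing `rho`.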
Correlated Fluctuations in Strongly Coupled Binary Networks Beyond Equilibrium
NASA Astrophysics Data System (ADS)
Dahmen, David; Bos, Hannah; Helias, Moritz
2016-07-01
Randomly coupled Ising spins constitute the classical model of collective phenomena in disordered systems, with applications covering glassy magnetism and frustration, combinatorial optimization, protein folding, stock market dynamics, and social dynamics. The phase diagram of these systems is obtained in the thermodynamic limit by averaging over the quenched randomness of the couplings. However, many applications require the statistics of activity for a single realization of the possibly asymmetric couplings in finite-sized networks. Examples include reconstruction of couplings from the observed dynamics, representation of probability distributions for sampling-based inference, and learning in the central nervous system based on the dynamic and correlation-dependent modification of synaptic connections. The systematic cumulant expansion for kinetic binary (Ising) threshold units with strong, random, and asymmetric couplings presented here goes beyond mean-field theory and is applicable outside thermodynamic equilibrium; a system of approximate nonlinear equations predicts average activities and pairwise covariances in quantitative agreement with full simulations down to hundreds of units. The linearized theory yields an expansion of the correlation and response functions in collective eigenmodes, leads to an efficient algorithm solving the inverse problem, and shows that correlations are invariant under scaling of the interaction strengths.
High power transcranial beam steering for ultrasonic brain therapy
NASA Astrophysics Data System (ADS)
Pernot, M.; Aubry, J.-F.; Tanter, M.; Thomas, J.-L.; Fink, M.
2003-08-01
A sparse phased array is specially designed for non-invasive ultrasound transskull brain therapy. The array is made of 200 single elements corresponding to a new generation of high power transducers developed in collaboration with Imasonic (Besançon, France). Each element has a surface of 0.5 cm2 and works at 0.9 MHz central frequency with a maximum 20 W cm-2 intensity on the transducer surface. In order to optimize the steering capabilities of the array, several transducer distributions on a spherical surface are simulated: hexagonal, annular and quasi-random distributions. Using a quasi-random distribution significantly reduces the grating lobes. Furthermore, the simulations show the capability of the quasi-random array to electronically move the focal spot in the vicinity of the geometrical focus (up to +/-15 mm). Based on the simulation study, the array is constructed and tested. The skull aberrations are corrected by using a time reversal mirror with amplitude correction achieved thanks to an implantable hydrophone, and a sharp focus is obtained through a human skull. Several lesions are induced in fresh liver and brain samples through human skulls, demonstrating the accuracy and the steering capabilities of the system.
Explanatory Versus Pragmatic Trials: An Essential Concept in Study Design and Interpretation.
Merali, Zamir; Wilson, Jefferson R
2017-11-01
Randomized clinical trials often represent the highest level of clinical evidence available to evaluate the efficacy of an intervention in clinical medicine. Although the process of randomization serves to maximize internal validity, the external validity, or generalizability, of such studies depends on several factors determined at the design phase of the trial including eligibility criteria, study setting, and outcomes of interest. In general, explanatory trials are optimized to demonstrate the efficacy of an intervention in a highly selected patient group; however, findings from these studies may not be generalizable to the larger clinical problem. In contrast, pragmatic trials attempt to understand the real-world benefit of an intervention by incorporating design elements that allow for greater generalizability and clinical applicability of study results. In this article we describe the explanatory-pragmatic continuum for clinical trials in greater detail. Further, a well-accepted tool for grading trials on this continuum is described, and applied, to 2 recently published trials pertaining to the surgical management of lumbar degenerative spondylolisthesis.
Strojan, Primož; Vermorken, Jan B; Beitler, Jonathan J; Saba, Nabil F; Haigentz, Missak; Bossi, Paolo; Worden, Francis P; Langendijk, Johannes A; Eisbruch, Avraham; Mendenhall, William M; Lee, Anne W M; Harrison, Louis B; Bradford, Carol R; Smee, Robert; Silver, Carl E; Rinaldo, Alessandra; Ferlito, Alfio
2016-04-01
The optimal cumulative dose and timing of cisplatin administration in various concurrent chemoradiotherapy protocols for nonmetastatic head and neck squamous cell carcinoma (HNSCC) has not been determined. The absolute survival benefit at 5 years of concurrent chemoradiotherapy protocols versus radiotherapy alone observed in prospective randomized trials reporting on the use of cisplatin monochemotherapy for nonnasopharyngeal HNSCC was extracted. In the case of nonrandomized studies, the outcome results at 2 years were compared between groups of patients receiving different cumulative cisplatin doses. Eleven randomized trials and 7 nonrandomized studies were identified. In 6 definitive radiotherapy phase III trials, a statistically significant association (p = .027) between cumulative cisplatin dose, independent of the schedule, and overall survival benefit was observed for higher doses. Results support the conclusion that the cumulative dose of cisplatin in concurrent chemoradiation protocols for HNSCC has a significant positive correlation with survival. © 2015 Wiley Periodicals, Inc. Head Neck 38: E2151-E2158, 2016.
Theory of Random Copolymer Fractionation in Columns
NASA Astrophysics Data System (ADS)
Enders, Sabine
Random copolymers show polydispersity both with respect to molecular weight and with respect to chemical composition, and their physical and chemical properties depend on both polydispersities. For special applications, the two-dimensional distribution function must be adjusted to the application purpose. The adjustment can be achieved by polymer fractionation. From the thermodynamic point of view, the distribution function can be adjusted by the successive establishment of liquid-liquid equilibria (LLE) for suitable solutions of the polymer to be fractionated. The fractionation column is divided into theoretical stages. Assuming an LLE on each theoretical stage, the polymer fractionation can be modeled using phase equilibrium thermodynamics. As examples, simulations of stepwise fractionation in one direction, cross-fractionation in two directions, and two different column fractionations (Baker-Williams fractionation and continuous polymer fractionation) have been investigated. The simulation delivers the distribution according to the molecular weight and chemical composition in every obtained fraction, depending on the operating conditions, and is able to optimize the fractionation effectively.
Skorupski, K A; Uhl, J M; Szivek, A; Allstadt Frazier, S D; Rebhun, R B; Rodriguez, C O
2016-03-01
Despite numerous published studies describing adjuvant chemotherapy for canine appendicular osteosarcoma, there is no consensus as to the optimal chemotherapy protocol. The purpose of this study was to determine whether either of two protocols would be associated with a longer disease-free interval (DFI) in dogs with appendicular osteosarcoma following amputation. Dogs with histologically confirmed appendicular osteosarcoma that were free of gross metastases and underwent amputation were eligible for enrollment. Dogs were randomized to receive either six doses of carboplatin or three doses each of carboplatin and doxorubicin on an alternating schedule. Fifty dogs were included. Dogs receiving carboplatin alone had a significantly longer DFI (425 versus 135 days) than dogs receiving alternating carboplatin and doxorubicin (P = 0.04). Toxicity was similar between groups. These results suggest that six doses of carboplatin may be associated with a superior DFI when compared to six total doses of carboplatin and doxorubicin. © 2013 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Sun, Shi-Hai; Liang, Lin-Mei
2012-08-01
Phase randomization is a very important assumption in BB84 quantum key distribution (QKD) systems with a weak coherent source; otherwise, an eavesdropper may gain information about the final key. In this Letter, a stable and monitored active phase randomization scheme for one-way and two-way QKD systems is proposed and demonstrated in experiments. Furthermore, our scheme gives Alice an easy way to monitor the degree of randomization in experiments. Therefore, we expect our scheme to become a standard part of future QKD systems due to its security significance and feasibility.
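Why phase randomization matters can be seen numerically: averaging a coherent state's projector over a uniform global phase cancels the photon-number coherences, leaving a Poissonian mixture of Fock states (a textbook fact, sketched here in a truncated Fock basis).

```python
import numpy as np
from math import factorial

dim = 12                                   # Fock-space cutoff
sqrt_fact = np.sqrt([float(factorial(k)) for k in range(dim)])

def coherent(alpha):
    """Coherent-state amplitudes <n|alpha> in a truncated Fock basis."""
    n = np.arange(dim)
    return np.exp(-abs(alpha) ** 2 / 2) * alpha ** n / sqrt_fact

# Average |alpha e^{i theta}><alpha e^{i theta}| over a uniform global
# phase: off-diagonal coherences cancel, leaving a Poissonian mixture of
# photon numbers -- the state assumed by BB84 security proofs.
K, alpha0 = 64, 1.0
rho = np.zeros((dim, dim), dtype=complex)
for theta in 2 * np.pi * np.arange(K) / K:
    psi = coherent(alpha0 * np.exp(1j * theta))
    rho += np.outer(psi, psi.conj()) / K

off_diag = np.max(np.abs(rho - np.diag(np.diag(rho))))
poisson = (np.exp(-abs(alpha0) ** 2)
           * abs(alpha0) ** (2 * np.arange(dim)) / sqrt_fact ** 2)
```

Without this averaging, the residual coherences are exactly what a sophisticated eavesdropper could exploit.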
Random-phase metasurfaces at optical wavelengths
NASA Astrophysics Data System (ADS)
Pors, Anders; Ding, Fei; Chen, Yiting; Radko, Ilya P.; Bozhevolnyi, Sergey I.
2016-06-01
Random-phase metasurfaces, in which the constituents scatter light with random phases, have the property that an incident plane wave is diffusely scattered, thereby leading to a complex far-field response that is most suitably described by statistical means. In this work, we present and exemplify the statistical description of the far-field response, particularly highlighting how the response for polarised and unpolarised light might be alike or different depending on the correlation of scattering phases for two orthogonal polarisations. By utilizing gap plasmon-based metasurfaces, consisting of an optically thick gold film overlaid by a subwavelength thin glass spacer and an array of gold nanobricks, we design and realize random-phase metasurfaces at a wavelength of 800 nm. Optical characterisation of the fabricated samples convincingly demonstrates the diffuse scattering of reflected light, with statistics obeying the theoretical predictions. We foresee the use of random-phase metasurfaces for camouflage applications and as high-quality reference structures in dark-field microscopy, while the control of the statistics for polarised and unpolarised light might find usage in security applications. Finally, by incorporating a certain correlation between scattering by neighbouring metasurface constituents, new types of functionalities can be realised, such as a Lambertian reflector.
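The statistical far-field behaviour can be illustrated with the classic random-phasor-sum model: the intensity from many unit scatterers with independent uniform phases develops fully developed speckle statistics (contrast ≈ 1). Element counts below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)
n_scatterers, trials = 400, 5000

# Each far-field sample is a coherent sum of unit-amplitude scatterers
# with independent uniform random phases.
phases = rng.uniform(0.0, 2.0 * np.pi, (trials, n_scatterers))
intensity = np.abs(np.exp(1j * phases).sum(axis=1)) ** 2

mean_intensity = float(intensity.mean())              # ~ n_scatterers
contrast = float(intensity.std() / intensity.mean())  # ~ 1 for speckle
```

A contrast near one is the signature of the negative-exponential intensity statistics expected from uncorrelated scattering phases.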
Kale, Sushrut; Micheyl, Christophe; Heinz, Michael G.
2013-01-01
Listeners with sensorineural hearing loss (SNHL) often show poorer thresholds for fundamental-frequency (F0) discrimination, and poorer discrimination between harmonic and frequency-shifted (inharmonic) complex tones, than normal-hearing (NH) listeners—especially when these tones contain resolved or partially resolved components. It has been suggested that these perceptual deficits reflect reduced access to temporal-fine-structure (TFS) information, and could be due to degraded phase-locking in the auditory nerve (AN) with SNHL. In the present study, TFS and temporal-envelope (ENV) cues in single AN-fiber responses to bandpass-filtered harmonic and inharmonic complex tones were measured in chinchillas with either normal hearing or noise-induced SNHL. The stimuli were comparable to those used in recent psychophysical studies of F0 and harmonic/inharmonic discrimination. As in those studies, the rank of the center component was manipulated to produce different resolvability conditions, different phase relationships (cosine and random phase) were tested, and background noise was present. Neural TFS and ENV cues were quantified using cross-correlation coefficients computed using shuffled cross-correlograms between neural responses to REF (harmonic) and TEST (F0- or frequency-shifted) stimuli. In animals with SNHL, AN-fiber tuning curves showed elevated thresholds, broadened tuning, best-frequency shifts, and downward shifts in the dominant TFS response component; however, no significant degradation in the ability of AN fibers to encode TFS or ENV cues was found. Consistent with optimal-observer analyses, the results indicate that TFS and ENV cues depended only on the relevant frequency shift in Hz and thus were not degraded because phase-locking remained intact. These results suggest that perceptual “TFS-processing” deficits do not simply reflect degraded phase-locking at the level of the AN. 
To the extent that performance in F0 and harmonic/inharmonic discrimination tasks depends on TFS cues, it is likely through a more complicated (sub-optimal) decoding mechanism, which may involve "spatiotemporal" (place-time) neural representations. PMID:23716215
Comparison of optimization algorithms for the slow shot phase in HPDC
NASA Astrophysics Data System (ADS)
Frings, Markus; Berkels, Benjamin; Behr, Marek; Elgeti, Stefanie
2018-05-01
High-pressure die casting (HPDC) is a popular manufacturing process for aluminum processing. The slow shot phase is the first phase of this process, during which the molten metal is pushed towards the cavity under moderate plunger movement. The so-called shot curve describes this plunger movement, and a good design of the shot curve is important for producing high-quality cast parts. Three partially competing process goals characterize the slow shot phase: (1) reducing air entrapment, (2) avoiding temperature loss, and (3) minimizing the oxide caused by air-aluminum contact. Due to the harsh process conditions of high pressure and temperature, it is hard to design the shot curve experimentally. A few design rules exist that are based on theoretical considerations; nevertheless, the quality of the shot curve design still depends on the experience of the machine operator. To improve the shot curve, it is natural to use numerical optimization. This work compares different optimization strategies for the slow shot phase. The aim is to find the best optimization approach on a simple test problem.
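A toy version of such an optimizer comparison, using an invented one-parameter surrogate for the slow shot phase (ternary search, which exploits unimodality, versus pure random search):

```python
import random

# Invented surrogate cost for plunger speed v in the slow shot phase:
# air entrapment grows with speed, heat loss grows as speed falls.
def cost(v):
    return (v - 0.3) ** 2 + 0.05 / (v + 0.05)

lo, hi = 0.05, 1.0

# Strategy 1: ternary search, valid because this surrogate is unimodal.
a, b = lo, hi
while b - a > 1e-4:
    m1, m2 = a + (b - a) / 3, b - (b - a) / 3
    if cost(m1) < cost(m2):
        b = m2
    else:
        a = m1
v_ternary = (a + b) / 2

# Strategy 2: pure random search with a 200-evaluation budget.
random.seed(0)
v_random = min((random.uniform(lo, hi) for _ in range(200)), key=cost)
```

Both strategies land near the same optimum on this smooth test problem; the interesting differences arise for noisy or multimodal casting objectives.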
The random field Blume-Capel model revisited
NASA Astrophysics Data System (ADS)
Santos, P. V.; da Costa, F. A.; de Araújo, J. M.
2018-04-01
We have revisited the mean-field treatment of the Blume-Capel model in the presence of a discrete random magnetic field, as introduced by Kaufman and Kanner (1990). The magnetic field (H) versus temperature (T) phase diagrams for given values of the crystal field D were recovered in accordance with Kaufman and Kanner's original work. However, our main goal in the present work was to investigate the distinct structures of the crystal field versus temperature phase diagrams as the random magnetic field is varied, because similar models have presented reentrant phenomena due to randomness. Following previous works, we have classified the distinct phase diagrams according to five different topologies. The topological structure of the phase diagrams is maintained for both the H-T and D-T cases. Although the phase diagrams exhibit a richness of multicritical phenomena, we did not find any reentrant effect such as has been seen in similar models.
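The mean-field self-consistency equation for this model can be iterated directly; the sketch below assumes one common sign convention (Hamiltonian -JΣs_is_j - Σh_is_i + DΣs_i², spins s in {-1, 0, 1}, bimodal field ±H), which may differ in detail from Kaufman and Kanner's.

```python
import math

def magnetization(T, D, H, J=1.0, iters=2000):
    """Fixed-point iteration of the mean-field Blume-Capel equation
    with a symmetric bimodal random field h_i = +/-H:
      m = (1/2) sum_{h=+/-H} 2 sinh(b(Jm+h)) / (exp(bD) + 2 cosh(b(Jm+h)))
    """
    b = 1.0 / T
    m = 0.9                        # symmetry-broken initial guess
    for _ in range(iters):
        m = 0.5 * sum(
            2.0 * math.sinh(b * (J * m + h))
            / (math.exp(b * D) + 2.0 * math.cosh(b * (J * m + h)))
            for h in (+H, -H))
    return m

m_ferro = magnetization(T=0.2, D=0.0, H=0.0)   # deep ordered phase
m_para = magnetization(T=5.0, D=0.0, H=0.0)    # high-temperature phase
```

Sweeping T, D, and H with this routine traces out the phase boundaries whose topologies the paper classifies.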
NASA Astrophysics Data System (ADS)
Sakata, Ayaka; Xu, Yingying
2018-03-01
We analyse a linear regression problem with nonconvex regularization called smoothly clipped absolute deviation (SCAD) under an overcomplete Gaussian basis for Gaussian random data. We propose an approximate message passing (AMP) algorithm considering nonconvex regularization, namely SCAD-AMP, and analytically show that the stability condition corresponds to the de Almeida-Thouless condition in spin glass literature. Through asymptotic analysis, we show the correspondence between the density evolution of SCAD-AMP and the replica symmetric (RS) solution. Numerical experiments confirm that for a sufficiently large system size, SCAD-AMP achieves the optimal performance predicted by the replica method. Through replica analysis, a phase transition between replica symmetric and replica symmetry breaking (RSB) region is found in the parameter space of SCAD. The appearance of the RS region for a nonconvex penalty is a significant advantage that indicates the region of smooth landscape of the optimization problem. Furthermore, we analytically show that the statistical representation performance of the SCAD penalty is better than that of \
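The SCAD regularizer analysed above has a standard closed piecewise form (regularization strength λ > 0 and shape parameter a > 2, with a = 3.7 the customary choice); it is quadratic in the intermediate region and constant beyond aλ, which is what makes it nonconvex:

```latex
J_\lambda(\beta) =
\begin{cases}
\lambda\,|\beta|, & |\beta| \le \lambda,\\[4pt]
\dfrac{2a\lambda|\beta| - \beta^2 - \lambda^2}{2(a-1)}, & \lambda < |\beta| \le a\lambda,\\[4pt]
\dfrac{(a+1)\lambda^2}{2}, & |\beta| > a\lambda.
\end{cases}
```

Unlike the ℓ1 penalty, its derivative vanishes for large |β|, so large coefficients are not shrunk.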
Modeling, simulation, and estimation of optical turbulence
NASA Astrophysics Data System (ADS)
Formwalt, Byron Paul
This dissertation documents three new contributions to simulation and modeling of optical turbulence. The first contribution is the formalization, optimization, and validation of a modeling technique called successively conditioned rendering (SCR). The SCR technique is empirically validated by comparing the statistical error of random phase screens generated with the technique. The second contribution is the derivation of the covariance delineation theorem, which provides theoretical bounds on the error associated with SCR. It is shown empirically that the theoretical bound may be used to predict relative algorithm performance. Therefore, the covariance delineation theorem is a powerful tool for optimizing SCR algorithms. For the third contribution, we introduce a new method for passively estimating optical turbulence parameters, and demonstrate the method using experimental data. The technique was demonstrated experimentally, using a 100 m horizontal path at 1.25 m above sun-heated tarmac on a clear afternoon. For this experiment, we estimated Cn^2 ≈ 6.01 · 10^-9 m^(-2/3), l0 ≈ 17.9 mm, and L0 ≈ 15.5 m.
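SCR itself is specific to this dissertation, but the conventional baseline against which such techniques are compared is the FFT-filtered random phase screen with a Kolmogorov spectrum. The sketch below is that generic baseline, not the SCR algorithm; the 0.023 constant is the standard Kolmogorov phase-PSD prefactor.

```python
import numpy as np

def fft_phase_screen(n, delta, r0, seed=0):
    """Generate an n x n Kolmogorov phase screen (radians) by FFT filtering.

    n     : grid size (pixels)
    delta : grid spacing (m)
    r0    : Fried parameter (m)
    """
    rng = np.random.default_rng(seed)
    df = 1.0 / (n * delta)                      # frequency grid spacing (1/m)
    fx = np.fft.fftfreq(n, d=delta)
    fx, fy = np.meshgrid(fx, fx)
    f = np.hypot(fx, fy)
    f[0, 0] = np.inf                            # suppress the undefined DC term
    # Kolmogorov phase PSD: 0.023 r0^(-5/3) f^(-11/3)
    psd = 0.023 * r0 ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)
    cn = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    cn *= np.sqrt(psd) * df                     # shape white noise by the PSD
    screen = np.fft.ifft2(cn).real * n * n      # back to the spatial domain
    return screen

screen = fft_phase_screen(128, 0.01, 0.1)
```

The low-order (large-scale) modes are under-sampled by a plain FFT screen, which is one reason conditioned-sampling methods such as SCR are of interest.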
Support Vector Machine Based on Adaptive Acceleration Particle Swarm Optimization
Abdulameer, Mohammed Hasan; Othman, Zulaiha Ali
2014-01-01
Existing face recognition methods utilize particle swarm optimization (PSO) and opposition-based particle swarm optimization (OPSO) to optimize the parameters of SVM. However, the use of random values in the velocity calculation decreases the performance of these techniques: random values are normally used for the acceleration coefficients during the velocity computation, and this introduces randomness into the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate our proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBIRIS dataset. In this method, we initially perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique, and in AAPSO, the acceleration coefficients are computed from the particle fitness values. The parameters in SVM, which are optimized by AAPSO, perform efficiently for both face and iris recognition. A comparative analysis between our proposed AAPSO-SVM and the PSO-SVM technique is presented. PMID:24790584
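The abstract does not give AAPSO's exact update rule, so the following is only an illustrative sketch of the general idea: derive the acceleration coefficients from particle fitness ranks instead of drawing them at random. The rank-based formulas for c1 and c2 are an invention for illustration, not the paper's method.

```python
import numpy as np

def aapso_sketch(fitness, dim=2, n_particles=20, iters=100, seed=1):
    """PSO variant with fitness-derived (non-random) acceleration coefficients.
    Illustrative only; the AAPSO update in the paper may differ."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))     # initial positions
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([fitness(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()             # global best
    for _ in range(iters):
        f = np.array([fitness(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
        # adaptive coefficients: normalized fitness rank (0 = best particle)
        rank = f.argsort().argsort() / max(n_particles - 1, 1)
        c1 = (1.0 + rank)[:, None]                 # cognitive pull grows with rank
        c2 = (2.0 - rank)[:, None]                 # social pull shrinks with rank
        v = 0.7 * v + c1 * (pbest - x) + c2 * (g - x)
        x = x + v
    return g, fitness(g)

# minimize the sphere function as a stand-in for the SVM-parameter objective
best, best_f = aapso_sketch(lambda p: float(np.sum(p ** 2)))
```

In the paper the objective would be SVM cross-validation error over (C, γ); the sphere function here only demonstrates the mechanics.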
Atlas ranking and selection for automatic segmentation of the esophagus from CT scans
NASA Astrophysics Data System (ADS)
Yang, Jinzhong; Haas, Benjamin; Fang, Raymond; Beadle, Beth M.; Garden, Adam S.; Liao, Zhongxing; Zhang, Lifei; Balter, Peter; Court, Laurence
2017-12-01
In radiation treatment planning, the esophagus is an important organ-at-risk that should be spared in patients with head and neck cancer or thoracic cancer who undergo intensity-modulated radiation therapy. However, automatic segmentation of the esophagus from CT scans is extremely challenging because of the structure’s inconsistent intensity, low contrast against the surrounding tissues, complex and variable shape and location, and random air bubbles. The goal of this study is to develop an online atlas selection approach to choose a subset of optimal atlases for multi-atlas segmentation to delineate the esophagus automatically. We performed atlas selection in two phases. In the first phase, we used the correlation coefficient of the image content in a cubic region between each atlas and the new image to evaluate their similarity and to rank the atlases in an atlas pool. A subset of atlases based on this ranking was selected, and deformable image registration was performed to generate deformed contours and deformed images in the new image space. In the second phase of atlas selection, we used Kullback-Leibler divergence to measure the similarity of local-intensity histograms between the new image and each of the deformed images, and the measurements were used to rank the previously selected atlases. Deformed contours were overlapped sequentially, from the most to the least similar, and the overlap ratio was examined. We further identified a subset of optimal atlases by analyzing the variation of the overlap ratio versus the number of atlases. The deformed contours from these optimal atlases were fused together using a modified simultaneous truth and performance level estimation algorithm to produce the final segmentation. The approach was validated with promising results using both internal data sets (21 head and neck cancer patients and 15 thoracic cancer patients) and external data sets (30 thoracic patients).
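The two similarity measures used for atlas ranking can be sketched directly. This is a simplified stand-in: correlation is computed over the whole array rather than the cubic ROI the paper uses, and registration is omitted entirely.

```python
import numpy as np

def rank_by_correlation(new_img, atlases):
    """Phase 1: rank atlases by Pearson correlation with the new image
    (the paper restricts this to a cubic region of interest)."""
    scores = [np.corrcoef(new_img.ravel(), a.ravel())[0, 1] for a in atlases]
    return np.argsort(scores)[::-1]          # most similar first

def kl_divergence(p_img, q_img, bins=32, value_range=(0.0, 1.0)):
    """Phase 2 similarity: Kullback-Leibler divergence between
    intensity histograms of two images."""
    eps = 1e-12                              # avoid log(0) on empty bins
    p, _ = np.histogram(p_img, bins=bins, range=value_range, density=True)
    q, _ = np.histogram(q_img, bins=bins, range=value_range, density=True)
    p, q = p + eps, q + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# toy check: an identical atlas should rank first
rng = np.random.default_rng(0)
new_img = rng.random((32, 32))
atlases = [rng.random((32, 32)), new_img.copy(), rng.random((32, 32)) * 0.5]
order = rank_by_correlation(new_img, atlases)
```

In the full pipeline the phase-2 ranking is applied to the registration-deformed atlas images, and the overlap-ratio curve then selects how many of them to fuse.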
A dose optimization method for electron radiotherapy using randomized aperture beams
NASA Astrophysics Data System (ADS)
Engel, Konrad; Gauer, Tobias
2009-09-01
The present paper describes the entire optimization process of creating a radiotherapy treatment plan for advanced electron irradiation. Special emphasis is devoted to the selection of beam incidence angles and beam energies as well as to the choice of appropriate subfields generated by a refined version of intensity segmentation and a novel random aperture approach. The algorithms have been implemented in a stand-alone programme using dose calculations from a commercial treatment planning system. For this study, the treatment planning system Pinnacle from Philips has been used and connected to the optimization programme using an ASCII interface. Dose calculations in Pinnacle were performed by Monte Carlo simulations for a remote-controlled electron multileaf collimator (MLC) from Euromechanics. As a result, treatment plans for breast cancer patients could be significantly improved when using randomly generated aperture beams. The combination of beams generated through segmentation and randomization achieved the best results in terms of target coverage and sparing of critical organs. The treatment plans could be further improved by use of a field reduction algorithm. Without a relevant loss in dose distribution, the total number of MLC fields and monitor units could be reduced by up to 20%. In conclusion, using randomized aperture beams is a promising new approach in radiotherapy and exhibits potential for further improvements in dose optimization through a combination of randomized electron and photon aperture beams.
Xu, Jiuping; Feng, Cuiying
2014-01-01
This paper presents an extension of the multimode resource-constrained project scheduling problem for a large scale construction project where multiple parallel projects and a fuzzy random environment are considered. By taking into account the most typical goals in project management, a cost/weighted makespan/quality trade-off optimization model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform the fuzzy random parameters into fuzzy variables that are subsequently defuzzified using an expected value operator with an optimistic-pessimistic index. Then a combinatorial-priority-based hybrid particle swarm optimization algorithm is developed to solve the proposed model, where the combinatorial particle swarm optimization and priority-based particle swarm optimization are designed to assign modes to activities and to schedule activities, respectively. Finally, the results and analysis of a practical example at a large scale hydropower construction project are presented to demonstrate the practicality and efficiency of the proposed model and optimization method.
Templated Sphere Phase Liquid Crystals for Tunable Random Lasing
Chen, Ziping; Hu, Dechun; Chen, Xingwu; Zeng, Deren; Lee, Yungjui; Chen, Xiaoxian; Lu, Jiangang
2017-01-01
A sphere phase liquid crystal (SPLC), composed of three-dimensional twist structures with disclinations among them, exists between the isotropic phase and the blue phase in a very narrow temperature range of a few degrees centigrade. A low-concentration polymer template is applied to improve the thermal stability of SPLCs and broadens the temperature range to more than 448 K. By template processing, wavelength-tunable random lasing is demonstrated with a dye-doped SPLC. With different polymer concentrations, the reconstructed SPLC random lasing may achieve more than 40 nm of continuous wavelength shifting under electric field modulation. PMID:29140283
Comparison of genetic algorithm methods for fuel management optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeChaine, M.D.; Feltus, M.A.
1995-12-31
The CIGARO system was developed for genetic algorithm fuel management optimization. Tests were performed to find the best fuel-location swap mutation operator probability and to compare the genetic algorithm to a truly random search method. The tests showed that the fuel swap probability should be between 0% and 10%, and that a 50% probability definitely hampered the optimization. The genetic algorithm performed significantly better than the random search method, which did not even satisfy the peak normalized power constraint.
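A fuel-location swap mutation of the kind tuned above can be sketched in a few lines. The per-position probability semantics below are an assumption for illustration; CIGARO's exact operator is not described in the abstract.

```python
import random

def swap_mutation(loading, p_swap=0.05, rng=None):
    """With probability p_swap per core position, exchange the fuel assembly
    at that position with one at another randomly chosen position.
    The loading pattern stays a permutation of the original assemblies."""
    rng = rng or random.Random(0)
    child = list(loading)                    # leave the parent untouched
    for i in range(len(child)):
        if rng.random() < p_swap:
            j = rng.randrange(len(child))
            child[i], child[j] = child[j], child[i]
    return child

loading = list(range(40))                    # 40 assembly IDs as a toy core
child = swap_mutation(loading, p_swap=0.1)
```

Because the operator only exchanges positions, feasibility of the inventory (each assembly used exactly once) is preserved automatically, which is why swap mutations are popular for loading-pattern problems.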
A forced titration study of the antioxidant and immunomodulatory effects of Ambrotose AO supplement
2010-01-01
Background Oxidative stress plays a role in acute and chronic inflammatory disease, and antioxidant supplementation has demonstrated beneficial effects in the treatment of these conditions. This study was designed to determine the optimal dose of an antioxidant supplement in healthy volunteers to inform a Phase 3 clinical trial. Methods The study was designed as a combined Phase 1 and 2 open-label, forced titration dose response study in healthy volunteers (n = 21) to determine both acute safety and efficacy. Participants received a dietary supplement in a forced titration over five weeks, commencing with a no-treatment baseline through 1, 2, 4 and 8 capsules. The primary outcome measurement was ex vivo changes in serum oxygen radical absorbance capacity (ORAC). The secondary outcome measures were undertaken as an exploratory investigation of immune function. Results A significant increase in antioxidant activity (serum ORAC) was observed between baseline (no capsules) and the highest dose of 8 capsules per day (p = 0.040), representing a change of 36.6%. A quadratic function for dose levels was fitted in order to estimate a dose response curve for estimating the optimal dose. The quadratic component of the curve was significant (p = 0.047), with predicted serum ORAC scores increasing from the zero dose to a maximum at a predicted dose of 4.7 capsules per day and decreasing for higher doses. Among the secondary outcome measures, a significant dose effect was observed for phagocytosis of granulocytes, and a significant increase was also observed in COX-2 expression. Conclusion This study suggests that Ambrotose AO® capsules appear to be safe and most effective at a dosage of 4 capsules/day. It is important that this study is not over-interpreted; it aimed to find an optimal dose to assess the dietary supplement using a more rigorous clinical trial design.
The study achieved this aim and demonstrated that the dietary supplement has the potential to increase antioxidant activity. The most significant limitation of this study was that it was an open-label Phase 1/Phase 2 trial and is subject to potential bias, which is reduced with the use of randomization and blinding. To confirm the benefits of this dietary supplement, these effects now need to be demonstrated in a Phase 3 randomised controlled trial (RCT). Trial Registration: Australian and New Zealand Clinical Trials Register: ACTRN12605000258651 PMID:20433711
On the efficiency of a randomized mirror descent algorithm in online optimization problems
NASA Astrophysics Data System (ADS)
Gasnikov, A. V.; Nesterov, Yu. E.; Spokoiny, V. G.
2015-04-01
A randomized online version of the mirror descent method is proposed. It differs from the existing versions by the randomization method. Randomization is performed at the stage of the projection of a subgradient of the function being optimized onto the unit simplex rather than at the stage of the computation of a subgradient, which is common practice. As a result, a componentwise subgradient descent with a randomly chosen component is obtained, which admits an online interpretation. This observation, for example, has made it possible to uniformly interpret results on weighting expert decisions and propose the most efficient method for searching for an equilibrium in a zero-sum two-person matrix game with sparse matrix.
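The core idea, randomizing over coordinates so that each step touches a single randomly chosen component, can be sketched with an entropic (exponentiated-gradient) mirror map on the simplex. This is a generic sketch under that assumption; the paper's exact estimator and step-size schedule differ.

```python
import numpy as np

def randomized_mirror_descent(grad, n, steps=500, eta=0.1, seed=0):
    """Entropic mirror descent on the unit simplex where each step uses
    only one randomly chosen coordinate of the (sub)gradient, scaled by n
    so the coordinate estimate is unbiased."""
    rng = np.random.default_rng(seed)
    x = np.full(n, 1.0 / n)                  # start at the simplex center
    for _ in range(steps):
        g = grad(x)
        i = rng.integers(n)                  # componentwise randomization
        update = np.zeros(n)
        update[i] = n * g[i]                 # unbiased gradient estimate
        x = x * np.exp(-eta * update)        # multiplicative (entropic) step
        x /= x.sum()                         # renormalize: stays on simplex
    return x

# minimize <c, x> over the simplex: the optimum puts all mass on argmin(c)
c = np.array([0.9, 0.1, 0.5])
x = randomized_mirror_descent(lambda x: c, 3)
```

The multiplicative update is what makes the simplex projection free, which is exactly why mirror descent (rather than Euclidean projected gradient) suits expert-weighting and matrix-game problems.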
De Beer, Maarten; Lynen, Fréderic; Chen, Kai; Ferguson, Paul; Hanna-Brown, Melissa; Sandra, Pat
2010-03-01
Stationary-phase optimized selectivity liquid chromatography (SOS-LC) is a tool in reversed-phase LC (RP-LC) to optimize the selectivity for a given separation by combining stationary phases in a multisegment column. The presently (commercially) available SOS-LC optimization procedure and algorithm are only applicable to isocratic analyses. Step gradient SOS-LC has been developed, but this is still not very elegant for the analysis of complex mixtures composed of components covering a broad hydrophobicity range. A linear gradient prediction algorithm has been developed allowing one to apply SOS-LC as a generic RP-LC optimization method. The algorithm allows operation in isocratic, stepwise, and linear gradient run modes. The features of SOS-LC in the linear gradient mode are demonstrated by means of a mixture of 13 steroids, whereby baseline separation is predicted and experimentally demonstrated.
Supercomputer optimizations for stochastic optimal control applications
NASA Technical Reports Server (NTRS)
Chung, Siu-Leung; Hanson, Floyd B.; Xu, Huihuang
1991-01-01
Supercomputer optimizations for a computational method of solving stochastic, multibody, dynamic programming problems are presented. The computational method is valid for a general class of optimal control problems that are nonlinear, multibody dynamical systems, perturbed by general Markov noise in continuous time, i.e., nonsmooth Gaussian as well as jump Poisson random white noise. Optimization techniques for vector multiprocessors or vectorizing supercomputers include advanced data structures, loop restructuring, loop collapsing, blocking, and compiler directives. These advanced computing techniques and supercomputing hardware help alleviate Bellman's curse of dimensionality in dynamic programming computations, by permitting the solution of large multibody problems. Possible applications include lumped flight dynamics models for uncertain environments, such as large scale and background random aerospace fluctuations.
Dispositional Optimism and Therapeutic Expectations in Early Phase Oncology Trials
Jansen, Lynn A.; Mahadevan, Daruka; Appelbaum, Paul S.; Klein, William MP; Weinstein, Neil D.; Mori, Motomi; Daffé, Racky; Sulmasy, Daniel P.
2016-01-01
Purpose Prior research has identified unrealistic optimism as a bias that might impair informed consent among patient-subjects in early phase oncology trials. Optimism, however, is not a unitary construct – it can also be defined as a general disposition, or what is called dispositional optimism. We assessed whether dispositional optimism would be related to high expectations for personal therapeutic benefit reported by patient-subjects in these trials but not to the therapeutic misconception. We also assessed how dispositional optimism related to unrealistic optimism. Methods Patient-subjects completed questionnaires designed to measure expectations for therapeutic benefit, dispositional optimism, unrealistic optimism, and the therapeutic misconception. Results Dispositional optimism was significantly associated with higher expectations for personal therapeutic benefit (Spearman r=0.333, p<0.0001), but was not associated with the therapeutic misconception (Spearman r=−0.075, p=0.329). Dispositional optimism was weakly associated with unrealistic optimism (Spearman r=0.215, p=0.005). In multivariate analysis, both dispositional optimism (p=0.02) and unrealistic optimism (p<0.0001) were independently associated with high expectations for personal therapeutic benefit. Unrealistic optimism (p=.0001), but not dispositional optimism, was independently associated with the therapeutic misconception. Conclusion High expectations for therapeutic benefit among patient-subjects in early phase oncology trials should not be assumed to result from misunderstanding of specific information about the trials. Our data reveal that these expectations are associated with either a dispositionally positive outlook on life or biased expectations about specific aspects of trial participation. Not all manifestations of optimism are the same, and different types of optimism likely have different consequences for informed consent in early phase oncology research. PMID:26882017
Dispositional optimism and therapeutic expectations in early-phase oncology trials.
Jansen, Lynn A; Mahadevan, Daruka; Appelbaum, Paul S; Klein, William M P; Weinstein, Neil D; Mori, Motomi; Daffé, Racky; Sulmasy, Daniel P
2016-04-15
Prior research has identified unrealistic optimism as a bias that might impair informed consent among patient-subjects in early-phase oncology trials. However, optimism is not a unitary construct; it also can be defined as a general disposition, or what is called dispositional optimism. The authors assessed whether dispositional optimism would be related to high expectations for personal therapeutic benefit reported by patient-subjects in these trials but not to the therapeutic misconception. The authors also assessed how dispositional optimism related to unrealistic optimism. Patient-subjects completed questionnaires designed to measure expectations for therapeutic benefit, dispositional optimism, unrealistic optimism, and the therapeutic misconception. Dispositional optimism was found to be significantly associated with higher expectations for personal therapeutic benefit (Spearman rank correlation coefficient [r], 0.333; P<.0001), but was not associated with the therapeutic misconception (Spearman r, -0.075; P = .329). Dispositional optimism was found to be weakly associated with unrealistic optimism (Spearman r, 0.215; P = .005). On multivariate analysis, both dispositional optimism (P = .02) and unrealistic optimism (P<.0001) were found to be independently associated with high expectations for personal therapeutic benefit. Unrealistic optimism (P = .0001), but not dispositional optimism, was found to be independently associated with the therapeutic misconception. High expectations for therapeutic benefit among patient-subjects in early-phase oncology trials should not be assumed to result from misunderstanding of specific information regarding the trials. The data from the current study indicate that these expectations are associated with either a dispositionally positive outlook on life or biased expectations concerning specific aspects of trial participation. 
Not all manifestations of optimism are the same, and different types of optimism likely have different consequences for informed consent in early-phase oncology research. © 2016 American Cancer Society.
Local random configuration-tree theory for string repetition and facilitated dynamics of glass
NASA Astrophysics Data System (ADS)
Lam, Chi-Hang
2018-02-01
We derive a microscopic theory of glassy dynamics based on the transport of voids by micro-string motions, each of which involves particles arranged in a line hopping simultaneously, displacing one another. Disorder is modeled by a random energy landscape quenched in the configuration space of distinguishable particles, but transient in the physical space, as expected for glassy fluids. We study the evolution of local regions with m coupled voids. At low temperature, energetically accessible local particle configurations can be organized into a random tree, with nodes and edges denoting configurations and micro-string propagations, respectively. Such trees defined in the configuration space naturally describe systems defined in two- or three-dimensional physical space. A micro-string propagation initiated by a void can facilitate similar motions by other voids via perturbing the random energy landscape, realizing path interactions between voids or, equivalently, string interactions. We obtain explicit expressions for the particle diffusion coefficient and a particle return probability. Under our approximation, as temperature decreases, random trees of energetically accessible configurations exhibit a sequence of percolation transitions in the configuration space, with local regions containing fewer coupled voids entering the non-percolating immobile phase first. Dynamics is dominated by coupled voids of an optimal group size, which increases as temperature decreases. Comparison with a distinguishable-particle lattice model (DPLM) of glass shows very good quantitative agreement using only two adjustable parameters, related to typical energy fluctuations and the interaction range of the micro-strings.
Sousa, Sérgio Filipe; Fernandes, Pedro Alexandrino; Ramos, Maria João
2009-12-31
Gas-phase optimization of single biological molecules and of small active-site biological models has become a standard approach in first principles computational enzymology. The important role played by the surrounding environment (solvent, enzyme, both) is normally only accounted for through higher-level single point energy calculations performed using a polarizable continuum model (PCM) and an appropriate dielectric constant with the gas-phase-optimized geometries. In this study we analyze this widely used approximation, by comparing gas-phase-optimized geometries with geometries optimized with different PCM approaches (and considering different dielectric constants) for a representative data set of 20 very important biological molecules--the 20 natural amino acids. A total of 323 chemical bonds and 469 angles present in standard amino acid residues were evaluated. The results show that the use of gas-phase-optimized geometries can in fact be quite a reasonable alternative to the use of the more computationally intensive continuum optimizations, providing a good description of bond lengths and angles for typical biological molecules, even for charged amino acids, such as Asp, Glu, Lys, and Arg. This approximation is particularly successful if the protonation state of the biological molecule could be reasonably described in vacuum, a requirement that was already necessary in first principles computational enzymology.
Tools for Material Design and Selection
NASA Astrophysics Data System (ADS)
Wehage, Kristopher
The present thesis focuses on applications of numerical methods to create tools for material characterization, design and selection. The tools generated in this work incorporate a variety of programming concepts, from digital image analysis, geometry, optimization, and parallel programming to data-mining, databases and web design. The first portion of the thesis focuses on methods for characterizing clustering in bimodal 5083 Aluminum alloys created by cryomilling and powder metallurgy. The bimodal samples analyzed in the present work contain a mixture of a coarse grain phase, with a grain size on the order of several microns, and an ultra-fine grain phase, with a grain size on the order of 200 nm. The mixing of the two phases is not homogeneous and clustering is observed. To investigate clustering in these bimodal materials, various microstructures were created experimentally by conventional cryomilling, Hot Isostatic Pressing (HIP), Extrusion, Dual-Mode Dynamic Forging (DMDF) and a new 'Gradient' cryomilling process. Two techniques for quantitative clustering analysis are presented, formulated and implemented. The first technique, the Area Disorder function, provides a metric of the quality of coarse grain dispersion in an ultra-fine grain matrix and the second technique, the Two-Point Correlation function, provides a metric of long and short range spatial arrangements of the two phases, as well as an indication of the mean feature size in any direction. The two techniques are implemented on digital images created by Scanning Electron Microscopy (SEM) and Electron Backscatter Diffraction (EBSD) of the microstructures. To investigate structure--property relationships through modeling and simulation, strategies for generating synthetic microstructures are discussed and a computer program that generates randomized microstructures with desired configurations of clustering described by the Area Disorder Function is formulated and presented.
In the computer program, two-dimensional microstructures are generated by Random Sequential Adsorption (RSA) of voxelized ellipses representing the coarse grain phase. A simulated annealing algorithm is used to geometrically optimize the placement of the ellipses in the model to achieve varying user-defined configurations of spatial arrangement of the coarse grains. During the simulated annealing process, the ellipses are allowed to overlap up to a specified threshold, allowing triple junctions to form in the model. Once the simulated annealing process is complete, the remaining space is populated by smaller ellipses representing the ultra-fine grain phase. Uniform random orientations are assigned to the grains. The program generates text files that can be imported into Crystal Plasticity Finite Element Analysis Software for stress analysis. Finally, numerical methods and programming are applied to current issues in green engineering and hazard assessment. To understand hazards associated with materials and select safer alternatives, engineers and designers need access to up-to-date hazard information. However, hazard information comes from many disparate sources and aggregating, interpreting and taking action on the wealth of data is not trivial. In light of these challenges, a Framework for Automated Hazard Assessment based on the GreenScreen list translator is presented. The framework consists of a computer program that automatically extracts data from the GHS-Japan hazard database, loads the data into a machine-readable JSON format, transforms the JSON document into a GreenScreen JSON document using the GreenScreen List Translator v1.2 and performs GreenScreen Benchmark scoring on the material. The GreenScreen JSON documents are then uploaded to a document storage system to allow human operators to search for, modify or add additional hazard information via a web interface.
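The RSA seeding step described above can be sketched with equal disks standing in for the voxelized ellipses (the simulated annealing rearrangement and the overlap threshold are omitted). Candidate centers are drawn uniformly; a candidate is kept only if it does not overlap any disk already placed.

```python
import math
import random

def rsa_disks(n_target, radius, box=1.0, max_tries=20000, seed=0):
    """Random sequential adsorption of equal disks in a box of side `box`.
    Simplified stand-in for the voxelized-ellipse RSA described above."""
    rng = random.Random(seed)
    centers = []
    tries = 0
    while len(centers) < n_target and tries < max_tries:
        tries += 1
        x = rng.uniform(radius, box - radius)   # keep disks inside the box
        y = rng.uniform(radius, box - radius)
        # accept only if non-overlapping with every placed disk
        if all(math.hypot(x - cx, y - cy) >= 2 * radius for cx, cy in centers):
            centers.append((x, y))
    return centers

disks = rsa_disks(30, 0.05)
```

RSA alone produces a jammed but spatially uncorrelated arrangement; the thesis then anneals the placements to hit a target Area Disorder value, which this sketch does not attempt.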
Cook, Andrea J; Wellman, Robert D; Cherkin, Daniel C; Kahn, Janet R; Sherman, Karen J
2015-10-01
This is the first study to systematically evaluate the value of a longer treatment period for massage. We provide a framework for how to conceptualize an optimal dose in this challenging setting of nonpharmacologic treatments. The aim was to determine the optimal dose of massage for neck pain. This was a two-phase randomized trial for persons with chronic nonspecific neck pain. Primary randomization was to one of five groups receiving 4 weeks of massage (30 minutes 2x or 3x/wk, or 60 minutes 1x, 2x, or 3x/wk). Booster randomization assigned participants to receive an additional six massages, 60 minutes 1x/wk, or no additional massage. A total of 179 participants from Group Health and the general population of Seattle, WA, USA, recruited between June 2010 and August 2011, were included. The primary outcomes, self-reported neck-related dysfunction (Neck Disability Index) and pain (0-10 scale), were assessed at baseline, 12, and 26 weeks. Clinically meaningful improvement was defined as greater than or equal to a 5-point decrease in dysfunction and greater than or equal to a 30% decrease in pain from baseline. Clinically meaningful improvement for each primary outcome at both follow-up times was analyzed using adjusted modified Poisson generalized estimating equations (GEEs). Secondary analyses for the continuous outcomes used linear GEEs. There were no observed differences by primary treatment group at 12 or 26 weeks. Those receiving the booster dose had improvements in both dysfunction and pain at 12 weeks (dysfunction: relative risk [RR]=1.56 [1.08-2.25], p=.018; pain: RR=1.25 [0.98-1.61], p=.077), but these were nonsignificant at 26 weeks (dysfunction: RR=1.22 [0.85-1.74]; pain: RR=1.09 [0.82-1.43]). Subgroup analysis by primary and booster treatments found the booster dose effective only among those initially randomized to one of the 60-minute massage groups.
"Booster" doses for those initially receiving 60 minutes of massage should be incorporated into future trials of massage for chronic neck pain. Copyright © 2015 Elsevier Inc. All rights reserved.
Nagahama, Yuki; Shimobaba, Tomoyoshi; Kakue, Takashi; Masuda, Nobuyuki; Ito, Tomoyoshi
2017-05-01
A holographic projector utilizes holography techniques. However, there are several barriers to realizing holographic projections. One is deterioration of hologram image quality caused by speckle noise and ringing artifacts. The combination of the random phase-free method and the Gerchberg-Saxton (GS) algorithm has improved the image quality of holograms. However, the GS algorithm requires significant computation time. We propose faster methods for image quality improvement of random phase-free holograms using the characteristics of ringing artifacts.
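The Gerchberg-Saxton baseline that this work accelerates can be sketched in a few lines; this is a generic textbook implementation for a Fourier phase-only hologram, not the paper's random phase-free variant.

```python
import numpy as np

def gerchberg_saxton(target_amp, iterations=30, seed=0):
    """Classic G-S iteration for a Fourier phase-only hologram: alternate
    between the hologram plane (unit amplitude, free phase) and the image
    plane (target amplitude, free phase)."""
    rng = np.random.default_rng(seed)
    field = np.exp(1j * rng.uniform(0, 2 * np.pi, target_amp.shape))
    for _ in range(iterations):
        img = np.fft.fft2(field)
        img = target_amp * np.exp(1j * np.angle(img))   # impose target amplitude
        field = np.fft.ifft2(img)
        field = np.exp(1j * np.angle(field))            # phase-only constraint
    return np.angle(field)

# a square target; the reconstruction is the FFT of the phase-only hologram
target = np.zeros((64, 64))
target[20:44, 20:44] = 1.0
holo = gerchberg_saxton(target)
recon = np.abs(np.fft.fft2(np.exp(1j * holo)))
```

Each iteration costs two FFTs, which is exactly the per-target expense the proposed methods aim to avoid.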
Skoog, Gunilla; Edlund, Charlotta; Giske, Christian G; Mölstad, Sigvard; Norman, Christer; Sundvall, Pär-Daniel; Hedin, Katarina
2016-09-13
In 2014 the Swedish government assigned The Public Health Agency of Sweden to conduct studies evaluating the optimal use of existing antibiotic agents, with the aim of optimizing drug use and dosing regimens to improve clinical efficacy. The present study was selected through a structured prioritization process by independent experts. This phase IV study is a randomized, open-label, multicenter study with a non-inferiority design regarding the therapeutic use of penicillin V, with two parallel groups. The overall aim is to determine whether the total exposure to penicillin V can be reduced from 1000 mg three times daily for 10 days to 800 mg four times daily for 5 days when treating Streptococcus pyogenes (Lancefield group A) pharyngotonsillitis. Patients will be recruited from 17 primary health care centers in Sweden. Adult men and women, youth, and children ≥6 years of age who consult for a sore throat and are judged to have pharyngotonsillitis, with 3-4 Centor criteria and a positive rapid test for group A streptococci, will be included in the study. The primary outcome is clinical cure 5-7 days after discontinuation of antibiotic treatment. Follow-up controls will be done by telephone after 1 and 3 months. Throat symptoms, potential relapses, and complications will be monitored, as well as adverse events. Patients (n = 432) will be included over 2 years. In an era of increasing antimicrobial resistance and a shortage of new antimicrobial agents, it is necessary to revisit the optimal usage of old antibiotics. Old antimicrobial drugs are often associated with inadequate knowledge of pharmacokinetics and pharmacodynamics and a lack of optimized dosing regimens based on randomized controlled clinical trials. If a shorter and more potent treatment regimen is shown to be equivalent to the standard 10-day regimen, this would imply great advantages for both patients (adherence, adverse events, resistance) and the community (resistance, drug costs). EudraCT number 2015-001752-30.
Protocol FoHM/Tonsillit2015 date 22 June 2015, version 2. Approved by MPA of Sweden 3 July 2015, Approved by Regional Ethical Review Board in Lund, 25 June 2015.
Optimizing random searches on three-dimensional lattices
NASA Astrophysics Data System (ADS)
Yang, Benhao; Yang, Shunkun; Zhang, Jiaquan; Li, Daqing
2018-07-01
Search is a universal behavior of many types of intelligent individuals. While most studies have focused on search in two- or infinite-dimensional space, how search can be optimized in three-dimensional space remains an open question. Here we study random searches on three-dimensional (3d) square lattices with periodic boundary conditions, and explore the optimal search strategy with a power-law step length distribution, p(l) ∼ l^(-μ), known as Lévy flights. We find that compared to random searches on two-dimensional (2d) lattices, the optimal exponent μopt on 3d lattices is smaller in the non-destructive case and remains similar in the destructive case. We also find that μopt decreases as the lattice length in the z direction increases under high target density. Our findings may help us understand the role of spatial dimension in search behaviors.
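The Lévy-flight step sampling behind such a search can be sketched in a few lines. This is a generic illustration, not the authors' simulation code: it draws step lengths by inverse-transform sampling from a truncated p(l) ∝ l^(-μ) (μ ≠ 1; the truncation bound `l_max` is an assumption), then walks site by site on a periodic cubic lattice so that sites along the way are scanned.

```python
import random

def sample_levy_step(rng, mu, l_min=1.0, l_max=1e3):
    # Inverse-transform sample from p(l) ~ l^(-mu), truncated to [l_min, l_max], mu != 1
    u = rng.random()
    a = l_min ** (1.0 - mu)
    b = l_max ** (1.0 - mu)
    return (a + u * (b - a)) ** (1.0 / (1.0 - mu))

def levy_walk_3d(n_steps, mu, L, seed=0):
    """Levy search on an L x L x L lattice with periodic boundary conditions."""
    rng = random.Random(seed)
    x = [0, 0, 0]
    visited = {tuple(x)}
    for _ in range(n_steps):
        length = max(1, int(round(sample_levy_step(rng, mu))))
        axis = rng.randrange(3)          # pick a lattice axis at random
        step = rng.choice((-1, 1))       # and a direction along it
        for _ in range(length):          # move site by site, scanning en route
            x[axis] = (x[axis] + step) % L
            visited.add(tuple(x))
    return visited
```

In a search experiment one would place targets on the lattice and record the first-passage statistics as a function of μ; here `visited` simply collects the scanned sites.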
Adaptive coupling optimized spiking coherence and synchronization in Newman-Watts neuronal networks
NASA Astrophysics Data System (ADS)
Gong, Yubing; Xu, Bo; Wu, Ya'nan
2013-09-01
In this paper, we numerically study the effect of adaptive coupling on the temporal coherence and synchronization of spiking activity in Newman-Watts Hodgkin-Huxley neuronal networks. It is found that random shortcuts can enhance spiking synchronization more rapidly when the increment speed of the adaptive coupling is increased, and can optimize the temporal coherence of spikes only when that increment speed is appropriate. It is also found that the adaptive coupling strength can enhance the synchronization of spikes and can optimize their temporal coherence when the random shortcuts are appropriate. These results show that adaptive coupling strongly influences shortcut-related spiking activity and can enhance and optimize the temporal coherence and synchronization of spiking activity in the network. These findings may help us better understand the role of adaptive coupling in improving information processing and transmission in neural systems.
Self-deployable mobile sensor networks for on-demand surveillance
NASA Astrophysics Data System (ADS)
Miao, Lidan; Qi, Hairong; Wang, Feiyi
2005-05-01
This paper studies two interconnected problems in mobile sensor network deployment, the optimal placement of heterogeneous mobile sensor platforms for cost-efficient and reliable coverage purposes, and the self-organizable deployment. We first develop an optimal placement algorithm based on a "mosaicked technology" such that different types of mobile sensors form a mosaicked pattern uniquely determined by the popularity of different types of sensor nodes. The initial state is assumed to be random. In order to converge to the optimal state, we investigate the swarm intelligence (SI)-based sensor movement strategy, through which the randomly deployed sensors can self-organize themselves to reach the optimal placement state. The proposed algorithm is compared with the random movement and the centralized method using performance metrics such as network coverage, convergence time, and energy consumption. Simulation results are presented to demonstrate the effectiveness of the mosaic placement and the SI-based movement.
Optimal design of aperiodic, vertical silicon nanowire structures for photovoltaics.
Lin, Chenxi; Povinelli, Michelle L
2011-09-12
We design a partially aperiodic, vertically aligned silicon nanowire array that maximizes photovoltaic absorption. The optimal structure is obtained using a random walk algorithm with a transfer-matrix-based electromagnetic forward solver. The optimal aperiodic structure exhibits a 2.35-fold enhancement in ultimate efficiency compared with its periodic counterpart. Its spectral behavior mimics that of a periodic array with a larger lattice constant. For our system, we find that randomly selected aperiodic structures invariably outperform the periodic array.
Self-duality and phase structure of the 4D random-plaquette Z2 gauge model
NASA Astrophysics Data System (ADS)
Arakawa, Gaku; Ichinose, Ikuo; Matsui, Tetsuo; Takeda, Koujin
2005-03-01
In the present paper, we study the 4-dimensional Z2 lattice gauge model with a random gauge coupling: the random-plaquette gauge model (RPGM). The gauge coupling at each plaquette takes the value J with probability 1-p and -J with probability p. This model exhibits a confinement-Higgs phase transition. We numerically obtain a phase boundary curve in the (p, T)-plane, where T is the "temperature" measured in units of J/k. This model plays an important role in estimating the accuracy threshold of a quantum memory based on a toric code. Here we are mainly interested in its "self-duality" aspect and its relationship with the random-bond Ising model (RBIM) in two dimensions. The "self-duality" argument can be applied to both the RPGM and the RBIM, giving the same duality equations and hence predicting the same phase boundary. The phase boundary curve obtained by our numerical simulation almost coincides with this predicted boundary in the high-temperature region. The phase transition is of first order for relatively small values of p < 0.08, but becomes of second order for larger p. The value of p at the intersection of the phase boundary curve and the Nishimori line is regarded as the accuracy threshold of errors in a toric quantum memory. It is estimated as p = 0.110 ± 0.002, which is very close to the value conjectured by Takeda and Nishimori through the "self-duality" argument.
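A minimal sketch of the quenched disorder and the Nishimori line referenced above (the standard condition exp(-2J/kT) = p/(1-p), written here with k = 1). Function names are illustrative, and the Monte Carlo machinery that actually produces the phase boundary is not shown:

```python
import math
import random

def random_plaquette_couplings(n_plaquettes, p, J=1.0, seed=0):
    # Quenched disorder: each plaquette coupling is -J with probability p, +J otherwise
    rng = random.Random(seed)
    return [-J if rng.random() < p else J for _ in range(n_plaquettes)]

def nishimori_temperature(p, J=1.0):
    # Nishimori line: exp(-2J/T) = p/(1-p)  =>  T = 2J / ln((1-p)/p)   (k = 1)
    return 2.0 * J / math.log((1.0 - p) / p)
```

The intersection of the numerically obtained phase boundary with the curve T = nishimori_temperature(p) is what yields the accuracy-threshold estimate p = 0.110 ± 0.002 quoted in the abstract.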
Color image encryption based on gyrator transform and Arnold transform
NASA Astrophysics Data System (ADS)
Sui, Liansheng; Gao, Bo
2013-06-01
A color image encryption scheme using gyrator transform and Arnold transform is proposed, which has two security levels. In the first level, the color image is separated into three components: red, green and blue, which are normalized and scrambled using the Arnold transform. The green component is combined with the first random phase mask and transformed to an interim using the gyrator transform. The first random phase mask is generated with the sum of the blue component and a logistic map. Similarly, the red component is combined with the second random phase mask and transformed to three-channel-related data. The second random phase mask is generated with the sum of the phase of the interim and an asymmetrical tent map. In the second level, the three-channel-related data are scrambled again and combined with the third random phase mask generated with the sum of the previous chaotic maps, and then encrypted into a gray scale ciphertext. The encryption result has stationary white noise distribution and camouflage property to some extent. In the process of encryption and decryption, the rotation angle of gyrator transform, the iterative numbers of Arnold transform, the parameters of the chaotic map and generated accompanied phase function serve as encryption keys, and hence enhance the security of the system. Simulation results and security analysis are presented to confirm the security, validity and feasibility of the proposed scheme.
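The Arnold-transform scrambling step used in both security levels can be illustrated with the standard cat-map formulation (x, y) → ((x + y) mod n, (x + 2y) mod n) on an n×n array. This is a generic sketch, not the paper's implementation, which also involves the gyrator transform and chaotic-map phase masks; because the map matrix has determinant 1, the iterated scrambling is exactly invertible, which is what decryption relies on:

```python
def arnold_scramble(img, iterations=1):
    # Arnold cat map on a square image: (x, y) -> ((x + y) mod n, (x + 2y) mod n)
    n = len(img)
    for _ in range(iterations):
        out = [row[:] for row in img]
        for y in range(n):
            for x in range(n):
                out[(x + 2 * y) % n][(x + y) % n] = img[y][x]
        img = out
    return img

def arnold_unscramble(img, iterations=1):
    # Inverse map: (x', y') -> ((2x' - y') mod n, (y' - x') mod n)
    n = len(img)
    for _ in range(iterations):
        out = [row[:] for row in img]
        for y2 in range(n):
            for x2 in range(n):
                out[(y2 - x2) % n][(2 * x2 - y2) % n] = img[y2][x2]
        img = out
    return img
```

The iteration count plays the role of a key, as the abstract notes ("the iterative numbers of Arnold transform ... serve as encryption keys").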
Optimized parameter estimation in the presence of collective phase noise
NASA Astrophysics Data System (ADS)
Altenburg, Sanah; Wölk, Sabine; Tóth, Géza; Gühne, Otfried
2016-11-01
We investigate phase and frequency estimation with different measurement strategies under the effect of collective phase noise. First, we consider the standard linear estimation scheme and present an experimentally realizable optimization of the initial probe states by collective rotations; we identify the optimal rotation angle for different measurement times. Second, we show that sub-shot-noise sensitivity, up to the Heisenberg limit, can be reached in the presence of collective phase noise by using differential interferometry, where one part of the system is used to monitor the noise. For this, not only Greenberger-Horne-Zeilinger states but also symmetric Dicke states are suitable. We investigate the optimal splitting of a general symmetric Dicke state between the two inputs and discuss possible experimental realizations of differential interferometry.
Calculation of a double reactive azeotrope using stochastic optimization approaches
NASA Astrophysics Data System (ADS)
Mendes Platt, Gustavo; Pinheiro Domingos, Roberto; Oliveira de Andrade, Matheus
2013-02-01
A homogeneous reactive azeotrope is a thermodynamic coexistence condition of two phases under chemical and phase equilibrium, where the compositions of both phases (in the Ung-Doherty sense) are equal. This kind of nonlinear phenomenon arises in real-world situations and has applications in the chemical and petrochemical industries. The reactive azeotrope calculation is modeled as a nonlinear algebraic system combining phase equilibrium, chemical equilibrium, and azeotropy equations. This nonlinear system can exhibit more than one solution, corresponding to a double reactive azeotrope. The robust calculation of reactive azeotropes can be conducted by several approaches, such as interval-Newton/generalized bisection algorithms and hybrid stochastic-deterministic frameworks. In this paper, we investigate the numerical aspects of calculating reactive azeotropes using two metaheuristics: the Luus-Jaakola adaptive random search and the Firefly algorithm. Moreover, we present results for a system of industrial interest with more than one azeotrope: isobutene/methanol/methyl-tert-butyl-ether (MTBE). We present convergence patterns for both algorithms, illustrating, in a two-dimensional subdomain, the identification of reactive azeotropes. A strategy for calculating multiple roots of nonlinear systems is also applied. The results indicate that both algorithms are suitable and robust when applied to reactive azeotrope calculations for this "challenging" nonlinear system.
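A bare-bones sketch of the Luus-Jaakola adaptive random search mentioned above, applied to a generic objective: sample candidates uniformly in a box around the incumbent and shrink the box each outer iteration. The shrink rate and iteration counts are illustrative assumptions; the paper applies the method to the azeotropy equation system, which is not reproduced here:

```python
import random

def luus_jaakola(f, x0, bounds, n_outer=50, n_inner=20, shrink=0.95, seed=0):
    """Luus-Jaakola adaptive random search: uniform sampling in a box
    centered on the incumbent, with the box contracting each outer loop."""
    rng = random.Random(seed)
    best = list(x0)
    fbest = f(best)
    size = [hi - lo for lo, hi in bounds]
    for _ in range(n_outer):
        for _ in range(n_inner):
            cand = [best[d] + size[d] * (rng.random() - 0.5) for d in range(len(best))]
            # clamp candidates back into the search domain
            cand = [min(max(c, bounds[d][0]), bounds[d][1]) for d, c in enumerate(cand)]
            fc = f(cand)
            if fc < fbest:
                best, fbest = cand, fc
        size = [s * shrink for s in size]   # contract the search region
    return best, fbest
```

For a root-finding formulation such as the azeotropy system, `f` would be the squared residual norm of the nonlinear equations, so that roots correspond to minima with value zero.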
Wang, Y; Harrison, M; Clark, B J
2006-02-10
An optimization strategy for the separation of an acidic mixture on a monolithic stationary phase is presented, with the aid of experimental design and response surface methodology (RSM). An orthogonal array design (OAD), OA(16) (2(15)), was used to choose the significant parameters for the optimization. The significant factors were optimized using a central composite design (CCD), and quadratic models relating the dependent and independent parameters were built. The mathematical models were tested on a number of simulated data sets and had a coefficient of R(2) > 0.97 (n = 16). On applying the optimization strategy, the factor effects were visualized as three-dimensional (3D) response surfaces and contour plots. The optimal condition, achieved in less than 40 min, used the monolithic packing with a mobile phase of methanol/20 mM phosphate buffer pH 2.7 (25.5/74.5, v/v). The method showed good agreement between the experimental data and predicted values throughout the studied parameter space and was suitable for optimization studies on the monolithic stationary phase for acidic compounds.
Munoz, F. D.; Hobbs, B. F.; Watson, J. -P.
2016-02-01
A novel two-phase bounding and decomposition approach is proposed to compute optimal and near-optimal solutions to large-scale mixed-integer investment planning problems that consider a large number of operating subproblems, each of which is a convex optimization. Our motivating application is the planning of power transmission and generation, in which policy constraints are designed to incentivize high amounts of intermittent generation in electric power systems. The bounding phase exploits Jensen's inequality to define a lower bound, which we extend to stochastic programs that use expected-value constraints to enforce policy objectives. The decomposition phase, in which the bounds are tightened, improves upon the standard Benders' algorithm by accelerating the convergence of the bounds. The lower bound is tightened by using a Jensen's inequality-based approach to introduce an auxiliary lower bound into the Benders master problem. Upper bounds for both phases are computed using a sub-sampling approach executed on a parallel computer system. Numerical results show that only the bounding phase is necessary if loose optimality gaps are acceptable, but the decomposition phase is required to attain tight optimality gaps. Moreover, using both phases performs better, in terms of convergence speed, than attempting to solve the problem using just the bounding phase or regular Benders decomposition alone.
Pivot methods for global optimization
NASA Astrophysics Data System (ADS)
Stanton, Aaron Fletcher
A new algorithm is presented for locating the global minimum of a multiple-minima problem. It begins with a series of randomly placed probes in phase space, and then iteratively redistributes the worst probes into better regions of phase space until a chosen convergence criterion is fulfilled. The method converges quickly, does not require derivatives, and is resistant to becoming trapped in local minima. Comparison of this algorithm with others on a standard test suite demonstrates that the number of function calls is decreased, conservatively, by a factor of about three at the same degree of accuracy. Two major variations of the method are presented, differing primarily in how the probes that act as the basis for the new probes are chosen. The first variation, termed the lowest energy pivot method, ranks all probes by their energy and keeps the best probes; the probes being discarded select from those being kept as the basis for the new cycle. In the second variation, the nearest neighbor pivot method, all probes are paired with their nearest neighbor, and the member of each pair with the higher energy is relocated in the vicinity of its neighbor. Both methods are tested against a standard test suite of functions to determine their relative efficiency, and the nearest neighbor pivot method is found to be the more efficient. A series of Lennard-Jones clusters is optimized with the nearest neighbor method, and a scaling law is found for CPU time versus the number of particles in the system. The two methods are then compared more explicitly, and finally a study of the use of the pivot method for solving the Schrödinger equation is presented. The nearest neighbor method is able to solve for the ground state of the quantum harmonic oscillator from a purely random initialization of the wavefunction.
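The nearest neighbor pivot idea can be sketched as follows: each probe is paired with its nearest neighbor, and the higher-energy member of the pair is resampled near the better one within a shrinking neighborhood. This is a hedged reconstruction from the description above; the step-size schedule, probe count, and clamping to the search box are assumptions, not details from the dissertation:

```python
import random

def nn_pivot_minimize(f, bounds, n_probes=20, n_cycles=100, shrink=0.95, seed=0):
    """Nearest neighbor pivot search: the worse member of each
    nearest-neighbor pair is relocated near the better one."""
    rng = random.Random(seed)
    dim = len(bounds)
    probes = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_probes)]
    energy = [f(p) for p in probes]
    width = [hi - lo for lo, hi in bounds]
    step = 1.0  # relative size of the relocation neighborhood

    def clamp(v, d):
        lo, hi = bounds[d]
        return min(max(v, lo), hi)

    for _ in range(n_cycles):
        for i in range(n_probes):
            # nearest neighbor of probe i (squared Euclidean distance)
            j = min((k for k in range(n_probes) if k != i),
                    key=lambda k: sum((probes[i][d] - probes[k][d]) ** 2
                                      for d in range(dim)))
            if energy[i] > energy[j]:
                # relocate the worse probe in the vicinity of its better neighbor
                probes[i] = [clamp(probes[j][d] + step * width[d] * (rng.random() - 0.5), d)
                             for d in range(dim)]
                energy[i] = f(probes[i])
        step *= shrink  # gradually tighten the relocation neighborhood
    i_best = min(range(n_probes), key=lambda k: energy[k])
    return probes[i_best], energy[i_best]
```

Note the derivative-free character the abstract emphasizes: only function values are compared, and the best probe is never relocated, so the incumbent minimum can only improve.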
Phase diagram of matrix compressed sensing
NASA Astrophysics Data System (ADS)
Schülke, Christophe; Schniter, Philip; Zdeborová, Lenka
2016-12-01
In the problem of matrix compressed sensing, we aim to recover a low-rank matrix from a few noisy linear measurements. In this contribution, we analyze the asymptotic performance of a Bayes-optimal inference procedure for a model where the matrix to be recovered is a product of random matrices. The results that we obtain using the replica method describe the state evolution of the Parametric Bilinear Generalized Approximate Message Passing (P-BiG-AMP) algorithm, recently introduced in J. T. Parker and P. Schniter [IEEE J. Select. Top. Signal Process. 10, 795 (2016), 10.1109/JSTSP.2016.2539123]. We show the existence of two different types of phase transition and their implications for the solvability of the problem, and we compare the results of our theoretical analysis to the numerical performance reached by P-BiG-AMP. Remarkably, the asymptotic replica equations for matrix compressed sensing are the same as those for a related but formally different problem of matrix factorization.
Optimism bias leads to inconclusive results - an empirical study
Djulbegovic, Benjamin; Kumar, Ambuj; Magazin, Anja; Schroen, Anneke T.; Soares, Heloisa; Hozo, Iztok; Clarke, Mike; Sargent, Daniel; Schell, Michael J.
2010-01-01
Objective: Optimism bias refers to unwarranted belief in the efficacy of new therapies. We assessed the impact of optimism bias on the proportion of trials that did not answer their research question successfully, and explored whether poor accrual or optimism bias is responsible for inconclusive results. Study Design: Systematic review. Setting: Retrospective analysis of a consecutive series of phase III randomized controlled trials (RCTs) performed under the aegis of National Cancer Institute Cooperative groups. Results: 359 trials (374 comparisons) enrolling 150,232 patients were analyzed. 70% (262/374) of the trials generated conclusive results according to the statistical criteria. Investigators made definitive statements related to the treatment preference in 73% (273/374) of studies. Investigators' judgments and statistical inferences were concordant in 75% (279/374) of trials. Investigators consistently overestimated their expected treatment effects, but to a significantly larger extent for inconclusive trials. The median ratio of expected over observed hazard ratio or odds ratio was 1.34 (range 0.19-15.40) in conclusive trials compared to 1.86 (range 1.09-12.00) in inconclusive studies (p<0.0001). Only 17% of the trials had treatment effects that matched the original researchers' expectations. Conclusion: Formal statistical inference is sufficient to answer the research question in 75% of RCTs. The answers to the other 25% depend mostly on subjective judgments, which at times conflict with statistical inference. Optimism bias significantly contributes to inconclusive results. PMID:21163620
Spatial Distribution of Phase Singularities in Optical Random Vector Waves.
De Angelis, L; Alpeggiani, F; Di Falco, A; Kuipers, L
2016-08-26
Phase singularities are dislocations widely studied in optical fields as well as in other areas of physics. With experiment and theory we show that the vectorial nature of light affects the spatial distribution of phase singularities in random light fields. While in scalar random waves phase singularities exhibit spatial distributions reminiscent of particles in isotropic liquids, in vector fields their distribution for the different vector components becomes anisotropic due to the direct relation between propagation and field direction. By incorporating this relation in the theory for scalar fields by Berry and Dennis [Proc. R. Soc. A 456, 2059 (2000)], we quantitatively describe our experiments.
Implementation of a quantum random number generator based on the optimal clustering of photocounts
NASA Astrophysics Data System (ADS)
Balygin, K. A.; Zaitsev, V. I.; Klimov, A. N.; Kulik, S. P.; Molotkov, S. N.
2017-10-01
To implement quantum random number generators, it is fundamentally important to have a mathematically provable and experimentally testable measurement process for the system from which the initial random sequence is generated; this ensures that the randomness indeed has a quantum nature. A quantum random number generator has been implemented using the detection of quasi-single-photon radiation by a silicon photomultiplier (SiPM) matrix, which makes it possible to reliably reach Poisson statistics of photocounts. The choice and use of the optimal clustering of photocounts for the initial sequence of photodetection events, together with a method of extracting a random sequence of 0's and 1's that is polynomial in the length of the sequence, have made it possible to reach an output rate of 64 Mbit/s for the certified random sequence.
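The abstract does not specify its polynomial-time extraction step in detail. As a generic illustration of turning biased detection events into unbiased output bits, here is the classic von Neumann debiasing extractor, a standard technique and explicitly not the authors' method:

```python
def von_neumann_extract(bits):
    """Von Neumann debiasing: for each non-overlapping pair of input bits,
    emit 0 for (0, 1), emit 1 for (1, 0), and discard (0, 0) and (1, 1)."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out
```

For independent biased bits this yields exactly unbiased output at the cost of a reduced rate, which is why practical generators like the one described above use more efficient extractors tuned to the source statistics.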
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kang, S; Kim, D; Kim, T
2016-06-15
Purpose: To propose a simple and effective cost value function to search for the optimal planning phase (gating window) and to demonstrate its feasibility for respiratory-correlated radiation therapy. Methods: We acquired 4DCT scans of 10 phases for 10 lung patients with tumors located near OARs such as the esophagus, heart, and spinal cord (i.e., central lung cancer patients). A simplified mathematical optimization function was established using the overlap volume histogram (OVH) between the target and each organ at risk (OAR) at each phase, together with the tolerance doses of the selected OARs, to achieve dose sparing of the surrounding OARs. For all patients and all phases, delineation of the target volume and the selected OARs (esophagus, heart, and spinal cord) was performed (by one observer, to avoid inter-observer variation), and cost values were then calculated for all phases. After the breathing phases were ranked according to the cost value function, the relationship between score and dose distribution at the highest- and lowest-cost-value phases was evaluated by comparing the mean/max doses. Results: The simplified mathematical cost value function showed noticeable differences from phase to phase, implying that it is possible to find optimal phases for the gating window. The lowest cost value, which may result in lower mean/max doses to the OARs, occurred at various phases across patients. The mean doses of the OARs decreased by about 10%, with statistical significance, for all 3 OARs at the phase with the lowest cost value. The max doses of the OARs also decreased by about 2-5% at the phase with the lowest cost value compared with the phase with the highest cost value. Conclusion: Optimal phases (from a dose distribution perspective) for the gating window can differ from patient to patient, and the proposed cost value function can be a useful tool for determining such phases without performing dose optimization calculations.
This research was supported by the Mid-career Researcher Program through NRF funded by the Ministry of Science, ICT & Future Planning of Korea (NRF-2014R1A2A1A10050270) and by the Radiation Technology R&D program through the National Research Foundation of Korea funded by the Ministry of Science, ICT & Future Planning (No. 2013M2A2A7038291)
Hybrid-drive implosion system for ICF targets
Mark, James W.
1988-08-02
Hybrid-drive implosion systems (20,40) for ICF targets (10,22,42) are described which permit a significant increase in target gain at fixed total driver energy. The ICF target is compressed in two phases, an initial compression phase and a final peak power phase, with each phase driven by a separate, optimized driver. The targets comprise a hollow spherical ablator (12) surroundingly disposed around fusion fuel (14). The ablator is first compressed to higher density by a laser system (24), or by an ion beam system (44), that in each case is optimized for this initial phase of compression of the target. Then, following compression of the ablator, energy is directly delivered into the compressed ablator by an ion beam driver system (30,48) that is optimized for this second phase of operation of the target. The fusion fuel (14) is driven, at high gain, to conditions wherein fusion reactions occur. This phase separation allows hydrodynamic efficiency and energy deposition uniformity to be individually optimized, thereby securing significant advantages in energy gain. In additional embodiments, the same or separate drivers supply energy for ICF target implosion.
Optimizing phase to enhance optical trap stiffness.
Taylor, Michael A
2017-04-03
Phase optimization offers promising capabilities in optical tweezers, allowing huge increases in the applied forces, trap stiffness, or measurement sensitivity. One key obstacle to potential applications has been the lack of an efficient algorithm to compute an optimized phase profile, with enhanced trapping experiments relying on slow programs that could take up to a week to converge. Here we introduce an algorithm that reduces the wait from days to minutes. We characterize the achievable increase in trap stiffness and its dependence on particle size, refractive index, and optical polarization. We further show that phase-only control can achieve almost all of the enhancement possible with full wavefront shaping; for instance, phase control allows 62 times higher trap stiffness for 10 μm silica spheres in water, while amplitude control and nontrivial polarization further increase this by factors of 1.26 and 1.01, respectively. This algorithm will facilitate future applications in optical trapping and, more generally, in wavefront optimization.
Autonomous Modeling, Statistical Complexity and Semi-annealed Treatment of Boolean Networks
NASA Astrophysics Data System (ADS)
Gong, Xinwei
This dissertation presents three studies on Boolean networks. Boolean networks are a class of mathematical systems consisting of interacting elements with binary state variables. Each element is a node with a Boolean logic gate, and the presence of interactions between any two nodes is represented by directed links. Boolean networks that implement the logic structures of real systems are studied as coarse-grained models of the real systems. Large random Boolean networks are studied with mean field approximations and used to provide a baseline of possible behaviors of large real systems. This dissertation presents one study of the former type, concerning the stable oscillation of a yeast cell-cycle oscillator, and two studies of the latter type, respectively concerning the statistical complexity of large random Boolean networks and an extension of traditional mean field techniques that accounts for the presence of short loops. In the cell-cycle oscillator study, a novel autonomous update scheme is introduced to study the stability of oscillations in small networks. A motif that corrects pulse-growing perturbations and a motif that grows pulses are identified. A combination of the two motifs is capable of sustaining stable oscillations. Examining a Boolean model of the yeast cell-cycle oscillator using an autonomous update scheme yields evidence that it is endowed with such a combination. Random Boolean networks are classified as ordered, critical or disordered based on their response to small perturbations. In the second study, random Boolean networks are taken as prototypical cases for the evaluation of two measures of complexity based on a criterion for optimal statistical prediction. One measure, defined for homogeneous systems, does not distinguish between the static spatial inhomogeneity in the ordered phase and the dynamical inhomogeneity in the disordered phase. 
A modification in which complexities of individual nodes are calculated yields vanishing complexity values for networks in the ordered and critical phases and for highly disordered networks, peaking somewhere in the disordered phase. Individual nodes with high complexity have, on average, a larger influence on the system dynamics. Lastly, a semi-annealed approximation that preserves the correlation between states at neighboring nodes is introduced to study a social game-inspired network model in which all links are bidirectional and all nodes have a self-input. The technique developed here is shown to yield accurate predictions of distribution of players' states, and accounts for some nontrivial collective behavior of game theoretic interest.
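The ordered/critical/disordered classification mentioned above can be probed numerically by following the spread of a one-bit perturbation. The sketch below is a generic NK-network illustration, not the dissertation's model; network size, connectivities, and trial counts are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def perturbation_spread(n, k, steps=10, trials=20):
    """Average Hamming distance between a trajectory and a copy with one
    flipped node, after `steps` synchronous updates of a random Boolean
    network with k inputs per node and unbiased random truth tables."""
    total = 0.0
    for _ in range(trials):
        inputs = np.array([rng.choice(n, k, replace=False) for _ in range(n)])
        tables = rng.integers(0, 2, (n, 2**k))      # random logic gates
        x = rng.integers(0, 2, n)
        y = x.copy()
        y[0] ^= 1                                   # flip one node
        powers = 2 ** np.arange(k)
        step = lambda s: tables[np.arange(n), (s[inputs] * powers).sum(axis=1)]
        for _ in range(steps):
            x, y = step(x), step(y)
        total += np.mean(x != y)
    return total / trials

ordered = perturbation_spread(200, 1)    # mean sensitivity k/2 = 0.5 < 1
chaotic = perturbation_spread(200, 4)    # mean sensitivity k/2 = 2 > 1
```

For unbiased tables the average sensitivity is k/2, so perturbations die out for k = 1 (ordered phase) and grow for k = 4 (disordered phase).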
NASA Astrophysics Data System (ADS)
Wang, Jun; Li, Xiaowei; Hu, Yuhen; Wang, Qiong-Hua
2018-03-01
A phase-retrieval-attack-free cryptosystem based on cylindrical asymmetric diffraction and double-random phase encoding (DRPE) is proposed. The plaintext is modeled as a cylinder, while the observed diffraction and holographic surfaces are concentric cylinders. Therefore, the plaintext can be encrypted through a two-step asymmetric diffraction process with double pseudo-random phase masks located on the object surface and the first diffraction surface. After inverse diffraction from the holographic surface to the object surface, the plaintext can be reconstructed using a decryption process. Since diffraction propagating from the inner cylinder to the outer cylinder differs from that in the reverse direction, the proposed cryptosystem is asymmetric and hence free of phase-retrieval attack. Numerical simulation results demonstrate the flexibility and effectiveness of the proposed cryptosystem.
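For readers unfamiliar with DRPE, the classical planar Fourier-domain version (not the cylindrical asymmetric variant proposed here) can be sketched in a few lines; the image size and seeds are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 64
plaintext = rng.random((n, n))                 # toy real-valued image

# Two statistically independent unit-modulus random phase masks
m1 = np.exp(2j * np.pi * rng.random((n, n)))   # object plane
m2 = np.exp(2j * np.pi * rng.random((n, n)))   # Fourier plane

def encrypt(img):
    """Classical DRPE: mask in the object plane, then in the Fourier plane."""
    return np.fft.ifft2(np.fft.fft2(img * m1) * m2)

def decrypt(cipher):
    """Undo the Fourier-plane mask, then the object-plane mask."""
    return np.abs(np.fft.ifft2(np.fft.fft2(cipher) * m2.conj()) * m1.conj())

cipher = encrypt(plaintext)
recovered = decrypt(cipher)
```

Because both masks have unit modulus, decryption with the conjugate masks recovers the plaintext exactly (up to floating-point error); the ciphertext itself is stationary white noise.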
A practical approach to automate randomized design of experiments for ligand-binding assays.
Tsoi, Jennifer; Patel, Vimal; Shih, Judy
2014-03-01
Design of experiments (DOE) is utilized in optimizing ligand-binding assays by modeling factor effects. To reduce the analyst's workload and the errors inherent in DOE, we propose the integration of automated liquid handlers to perform the randomized designs. A randomized design created in statistical software was imported into a custom macro that converts the design into a liquid-handler worklist to automate reagent delivery. An optimized assay was transferred to a contract research organization, resulting in a successful validation. We developed a practical solution for assay optimization by integrating DOE and automation to increase assay robustness and enable successful method transfer. The flexibility of this process allows it to be applied to a variety of assay designs.
Ryeznik, Yevgen; Sverdlov, Oleksandr; Wong, Weng Kee
2015-08-01
Response-adaptive randomization designs are becoming increasingly popular in clinical trial practice. In this paper, we present RARtool , a user interface software developed in MATLAB for designing response-adaptive randomized comparative clinical trials with censored time-to-event outcomes. The RARtool software can compute different types of optimal treatment allocation designs, and it can simulate response-adaptive randomization procedures targeting selected optimal allocations. Through simulations, an investigator can assess design characteristics under a variety of experimental scenarios and select the best procedure for practical implementation. We illustrate the utility of our RARtool software by redesigning a survival trial from the literature.
Rini, B I; Melichar, B; Fishman, M N; Oya, M; Pithavala, Y K; Chen, Y; Bair, A H; Grünwald, V
2015-07-01
In a randomized, double-blind phase II trial in patients with metastatic renal cell carcinoma (mRCC), axitinib versus placebo titration yielded a significantly higher objective response rate. We evaluated pharmacokinetic and blood pressure (BP) data from this study to elucidate relationships among axitinib exposure, BP change, and efficacy. Patients received axitinib 5 mg twice daily during a lead-in period. Patients who met dose-titration criteria were randomized 1:1 to stepwise dose increases with axitinib or placebo. Patients ineligible for randomization continued without dose increases. Serial 6-h and sparse pharmacokinetic sampling were carried out; BP was measured at clinic visits and at home in all patients, and by 24-h ambulatory BP monitoring (ABPM) in a subset of patients. Area under the plasma concentration-time curve from 0 to 24 h throughout the course of treatment (AUCstudy) was higher in patients with complete or partial responses than those with stable or progressive disease in the axitinib-titration arm, but comparable between these groups in the placebo-titration and nonrandomized arms. In the overall population, AUCstudy and efficacy outcomes were not strongly correlated. Mean BP across the population was similar when measured in clinic, at home, or by 24-h ABPM. Weak correlations were observed between axitinib steady-state exposure and diastolic BP. When grouped by change in diastolic BP from baseline, patients in the ≥10 and ≥15 mmHg groups had longer progression-free survival. Optimal axitinib exposure may differ among patients with mRCC. Pharmacokinetic or BP measurements cannot be used exclusively to guide axitinib dosing. Individualization of treatment with vascular endothelial growth factor receptor tyrosine kinase inhibitors, including axitinib, is thus more complex than anticipated and cannot be limited to a single clinical factor. © The Author 2015. 
Published by Oxford University Press on behalf of the European Society for Medical Oncology. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhao, Shi-Bo; Liu, Ming-Zhe; Yang, Lan-Ying
2015-04-01
In this paper we investigate the dynamics of an asymmetric exclusion process on a one-dimensional lattice with long-range hopping and random update, using Monte Carlo simulations and theoretical analysis. Particles in the model first attempt to hop over successive unoccupied sites with a probability q, which differs from previous exclusion process models. The probability q may represent the random access of particles. Numerical simulations for stationary particle currents, density profiles, and phase diagrams are obtained. There are three possible stationary phases in the system: the low density (LD) phase, the high density (HD) phase, and the maximal current (MC) phase. Interestingly, bulk density in the LD phase tends to zero, while the MC phase is governed by α, β, and q. The HD phase is nearly the same as in the normal TASEP, determined by the exit rate β. Theoretical analysis is in good agreement with simulation results. The proposed model may provide a better understanding of random interaction dynamics in complex systems. Project supported by the National Natural Science Foundation of China (Grant Nos. 41274109 and 11104022), the Fund for Sichuan Youth Science and Technology Innovation Research Team (Grant No. 2011JTD0013), and the Creative Team Program of Chengdu University of Technology.
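A minimal Monte Carlo sketch of a TASEP-like lattice gas with a long-range hop of probability q is given below. The boundary handling and parameter values are illustrative assumptions, not the authors' exact update rule.

```python
import numpy as np

rng = np.random.default_rng(3)

def tasep_long_range(L=100, alpha=0.3, beta=0.6, q=0.5, sweeps=2000):
    """Random-update exclusion process with a simplified long-range move:
    with probability q a particle jumps to the far end of the empty run
    in front of it, otherwise it hops one site.  Particles enter at the
    left with rate alpha and exit at the right with rate beta.  Returns
    the mean density profile, discarding the first half as transient."""
    tau = np.zeros(L, dtype=int)
    profile = np.zeros(L)
    kept = 0
    for sweep in range(sweeps):
        for _ in range(L + 1):
            i = rng.integers(-1, L)              # -1 encodes an entry attempt
            if i == -1:
                if tau[0] == 0 and rng.random() < alpha:
                    tau[0] = 1
            elif i == L - 1:
                if tau[i] == 1 and rng.random() < beta:
                    tau[i] = 0                   # particle leaves the lattice
            elif tau[i] == 1 and tau[i + 1] == 0:
                j = i + 1
                if rng.random() < q:             # long-range: skip the empty run
                    while j + 1 < L and tau[j + 1] == 0:
                        j += 1
                tau[i], tau[j] = 0, 1
        if sweep >= sweeps // 2:
            profile += tau
            kept += 1
    return profile / kept

rho = tasep_long_range()
```

With these parameters the system sits in a low-density-like regime; varying alpha, beta, and q explores the LD/HD/MC structure the abstract describes.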
NASA Astrophysics Data System (ADS)
Rittersdorf, I. M.; Antonsen, T. M., Jr.; Chernin, D.; Lau, Y. Y.
2011-10-01
Random fabrication errors may have detrimental effects on the performance of traveling-wave tubes (TWTs) of all types. A new scaling law for the modification in the average small signal gain and in the output phase is derived from the third order ordinary differential equation that governs the forward wave interaction in a TWT in the presence of random error that is distributed along the axis of the tube. Analytical results compare favorably with numerical results, in both gain and phase modifications as a result of random error in the phase velocity of the slow wave circuit. Results on the effect of the reverse-propagating circuit mode will be reported. This work supported by AFOSR, ONR, L-3 Communications Electron Devices, and Northrop Grumman Corporation.
Phase Transitions on Random Lattices: How Random is Topological Disorder?
NASA Astrophysics Data System (ADS)
Barghathi, Hatem; Vojta, Thomas
2015-03-01
We study the effects of topological (connectivity) disorder on phase transitions. We identify a broad class of random lattices whose disorder fluctuations decay much faster with increasing length scale than those of generic random systems, yielding a wandering exponent of ω = (d - 1) / (2 d) in d dimensions. The stability of clean critical points is thus governed by the criterion (d + 1) ν > 2 rather than the usual Harris criterion dν > 2 , making topological disorder less relevant than generic randomness. The Imry-Ma criterion is also modified, allowing first-order transitions to survive in all dimensions d > 1 . These results explain a host of puzzling violations of the original criteria for equilibrium and nonequilibrium phase transitions on random lattices. We discuss applications, and we illustrate our theory by computer simulations of random Voronoi and other lattices. This work was supported by the NSF under Grant Nos. DMR-1205803 and PHYS-1066293. We acknowledge the hospitality of the Aspen Center for Physics.
Epidemic spreading on random surfer networks with optimal interaction radius
NASA Astrophysics Data System (ADS)
Feng, Yun; Ding, Li; Hu, Ping
2018-03-01
In this paper, the optimal control problem of epidemic spreading on random surfer heterogeneous networks is considered. An epidemic spreading model is established according to the classification of individuals' initial interaction radii. Then, a control strategy is proposed based on adjusting individuals' interaction radii. The global stability of the disease-free and endemic equilibria of the model is investigated. We prove that an optimal solution exists for the optimal control problem and present its explicit form. Numerical simulations are conducted to verify the correctness of the theoretical results. It is shown that the optimal control strategy is effective in minimizing the density of infected individuals and the cost associated with the adjustment of interaction radii.
Optimizing event selection with the random grid search
NASA Astrophysics Data System (ADS)
Bhat, Pushpalatha C.; Prosper, Harrison B.; Sekmen, Sezen; Stewart, Chip
2018-07-01
The random grid search (RGS) is a simple, but efficient, stochastic algorithm to find optimal cuts that was developed in the context of the search for the top quark at Fermilab in the mid-1990s. The algorithm, and associated code, have been enhanced recently with the introduction of two new cut types, one of which has been successfully used in searches for supersymmetry at the Large Hadron Collider. The RGS optimization algorithm is described along with the recent developments, which are illustrated with two examples from particle physics. One explores the optimization of the selection of vector boson fusion events in the four-lepton decay mode of the Higgs boson and the other optimizes SUSY searches using boosted objects and the razor variables.
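The core of RGS is easy to sketch: candidate cuts are drawn from the signal events themselves, so the sampling density automatically concentrates where signal lives. The toy below uses Gaussian event samples, one-sided cuts, and an s/sqrt(s+b) figure of merit, all of which are illustrative assumptions rather than details from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy two-variable event samples: signal is shifted relative to background
signal = rng.normal(loc=[2.0, 2.0], scale=1.0, size=(5000, 2))
background = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(5000, 2))

def random_grid_search(sig, bkg, n_points=500):
    """RGS: each randomly chosen signal event defines a candidate set of
    one-sided cuts (keep events with x > cut in every variable); return
    the cut point maximizing the significance s / sqrt(s + b)."""
    idx = rng.choice(len(sig), n_points, replace=False)
    best_cut, best_z = None, -np.inf
    for cut in sig[idx]:
        s = np.all(sig > cut, axis=1).sum()      # surviving signal events
        b = np.all(bkg > cut, axis=1).sum()      # surviving background events
        if s + b > 0:
            z = s / np.sqrt(s + b)
            if z > best_z:
                best_cut, best_z = cut, z
    return best_cut, best_z

cut, z = random_grid_search(signal, background)
```

With no cut at all the significance here is 5000/sqrt(10000) = 50; the selected cut should beat that by rejecting background faster than signal.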
NASA Astrophysics Data System (ADS)
Nezhadali, Azizollah; Motlagh, Maryam Omidvar; Sadeghzadeh, Samira
2018-02-01
A selective method based on molecularly imprinted polymer (MIP) solid-phase extraction (SPE), with UV-Vis spectrophotometry as the detection technique, was developed for the determination of fluoxetine (FLU) in pharmaceutical and human serum samples. The MIPs were synthesized using pyrrole as a functional monomer in the presence of FLU as a template molecule. The factors affecting the preparation and extraction ability of the MIP, such as amount of sorbent, initiator concentration, monomer-to-template ratio, uptake shaking rate, uptake time, washing buffer pH, taking shaking rate, taking time, and polymerization time, were considered for optimization. First, a Plackett-Burman design (PBD) consisting of 12 randomized runs was applied to determine the influence of each factor. The other optimization processes were performed using central composite design (CCD), artificial neural network (ANN) and genetic algorithm (GA) approaches. At optimal conditions the calibration curve showed linearity over the concentration range of 10(-7) to 10(-8) M with a correlation coefficient (R2) of 0.9970. The limit of detection (LOD) for FLU was 6.56 × 10(-9) M. The repeatability of the method was 1.61%. The synthesized MIP sorbent showed good selectivity and sensitivity toward FLU. The MIP/SPE method was successfully applied to the determination of FLU in pharmaceutical, serum and plasma samples.
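The 12-run PBD mentioned above can be built by cyclic shifts of the classic Plackett-Burman generator row plus an all-minus run; the sketch below also randomizes the run order, as an automated worklist would require. The seed and the NumPy representation are incidental choices.

```python
import numpy as np

def plackett_burman_12(seed=0):
    """12-run Plackett-Burman design for up to 11 two-level factors,
    built from cyclic shifts of the classic generator row
    (+ + - + + + - - - + -), with the run order randomized."""
    gen = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])
    rows = [np.roll(gen, i) for i in range(11)]
    design = np.vstack(rows + [-np.ones(11, dtype=int)])
    rng = np.random.default_rng(seed)
    return design[rng.permutation(12)]          # randomized run order

design = plackett_burman_12()
```

The resulting 12 × 11 matrix is orthogonal (D.T @ D = 12 I), which is what lets main effects be estimated independently from only 12 runs.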
Expected Fitness Gains of Randomized Search Heuristics for the Traveling Salesperson Problem.
Nallaperuma, Samadhi; Neumann, Frank; Sudholt, Dirk
2017-01-01
Randomized search heuristics are frequently applied to NP-hard combinatorial optimization problems. The runtime analysis of randomized search heuristics has contributed tremendously to our theoretical understanding. Recently, randomized search heuristics have been examined regarding their achievable progress within a fixed-time budget. We follow this approach and present a fixed-budget analysis for an NP-hard combinatorial optimization problem. We consider the well-known Traveling Salesperson Problem (TSP) and analyze the fitness increase that randomized search heuristics are able to achieve within a given fixed-time budget. In particular, we analyze Manhattan and Euclidean TSP instances and the Randomized Local Search (RLS), (1+1) EA and (1+λ) EA algorithms for the TSP in a smoothed complexity setting, and derive lower bounds on the expected fitness gain for a specified number of generations.
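As a concrete instance of the fixed-budget view, the sketch below runs Randomized Local Search with 2-opt moves on a random Euclidean instance and reports the fitness gained within a fixed number of evaluations. Instance size, budget, and move choice are illustrative assumptions, not the paper's smoothed-analysis setting.

```python
import numpy as np

rng = np.random.default_rng(5)
cities = rng.random((30, 2))                       # random Euclidean instance

def tour_length(tour):
    """Total length of the closed tour visiting cities in the given order."""
    pts = cities[tour]
    return np.linalg.norm(pts - np.roll(pts, -1, axis=0), axis=1).sum()

def rls_fixed_budget(budget=2000):
    """Randomized Local Search under a fixed evaluation budget: propose
    one random 2-opt segment reversal per step and accept only strict
    improvements; return the initial and final tour lengths."""
    tour = rng.permutation(len(cities))
    init = best = tour_length(tour)
    for _ in range(budget):
        i, j = sorted(rng.choice(len(cities), 2, replace=False))
        cand = tour.copy()
        cand[i:j + 1] = cand[i:j + 1][::-1]        # 2-opt move
        c = tour_length(cand)
        if c < best:
            tour, best = cand, c
    return init, best

init_len, final_len = rls_fixed_budget()
```

The quantity of interest in a fixed-budget analysis is precisely init_len - final_len, the fitness gain achieved within the budget, rather than the time to reach an optimum.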
Emergence of an optimal search strategy from a simple random walk
Sakiyama, Tomoko; Gunji, Yukio-Pegio
2013-01-01
In reports addressing animal foraging strategies, it has been stated that Lévy-like algorithms represent an optimal search strategy in an unknown environment, because of their super-diffusion properties and power-law-distributed step lengths. Here, starting with a simple random walk algorithm, which offers the agent a randomly determined direction at each time step with a fixed move length, we investigated how flexible exploration is achieved if an agent alters its randomly determined next step forward and the rule that controls its random movement based on its own directional moving experiences. We showed that our algorithm led to an effective food-searching performance compared with a simple random walk algorithm and exhibited super-diffusion properties, despite the uniform step lengths. Moreover, our algorithm exhibited a power-law distribution independent of uniform step lengths. PMID:23804445
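The qualitative effect of adding directional memory to a fixed-step random walk can be illustrated with a persistent-walk toy model. This is not the authors' rule-updating algorithm; the persistence probability, walk length, and ensemble size are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(11)
STEPS, WALKERS = 200, 500
MOVES = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])

def mean_sq_displacement(persistence=0.0):
    """2-D lattice walk with fixed unit steps.  With probability
    `persistence` the walker repeats its previous direction, otherwise
    it draws a fresh uniform one (persistence=0 is the simple walk)."""
    msd = 0.0
    for _ in range(WALKERS):
        pos = np.zeros(2)
        d = rng.integers(4)
        for _ in range(STEPS):
            if rng.random() >= persistence:
                d = rng.integers(4)
            pos += MOVES[d]
        msd += pos @ pos
    return msd / WALKERS

simple = mean_sq_displacement(0.0)       # diffusive: MSD close to STEPS
persistent = mean_sq_displacement(0.9)   # directionally correlated walk
```

Even with uniform step lengths, directional correlations inflate the mean squared displacement by roughly (1+p)/(1-p), which is the flavor of enhancement (though not the mechanism) behind the super-diffusive search discussed above.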
Fleischhacker, W Wolfgang; Heikkinen, Martti E; Olié, Jean-Pierre; Landsberg, Wally; Dewaele, Patricia; McQuade, Robert D; Loze, Jean-Yves; Hennicken, Delphine; Kerselaers, Wendy
2010-09-01
Clozapine is associated with significant weight gain and metabolic disturbances. This multicentre, randomized study comprised a double-blind, placebo-controlled treatment phase of 16 wk, and an open-label extension phase of 12 wk. Outpatients who met DSM-IV-TR criteria for schizophrenia, who were not optimally controlled while on stable dosage of clozapine for > or =3 months and had experienced weight gain of > or =2.5 kg while taking clozapine, were randomized (n=207) to aripiprazole at 5-15 mg/d or placebo, in addition to a stable dose of clozapine. The primary endpoint was mean change from baseline in body weight at week 16 (last observation carried forward). Secondary endpoints included clinical efficacy, body mass index (BMI) and waist circumference. A statistically significant difference in weight loss was reported for aripiprazole vs. placebo (-2.53 kg vs. -0.38 kg, respectively, difference=-2.15 kg, p<0.001). Aripiprazole-treated patients also showed BMI (median reduction 0.8 kg/m(2)) and waist circumference reduction (median reduction 2.0 cm) vs. placebo (no change in either parameter, p<0.001 and p=0.001, respectively). Aripiprazole-treated patients had significantly greater reductions in total and low-density lipoprotein (LDL) cholesterol. There were no significant differences in Positive and Negative Syndrome Scale total score changes between groups but Clinical Global Impression Improvement and Investigator's Assessment Questionnaire scores favoured aripiprazole over placebo. Safety and tolerability were generally comparable between groups. Combining aripiprazole and clozapine resulted in significant weight, BMI and fasting cholesterol benefits to patients suboptimally treated with clozapine. Improvements may reduce metabolic risk factors associated with clozapine treatment.
Taher, Ali T; Origa, Raffaella; Perrotta, Silverio; Kourakli, Alexandra; Ruffo, Giovan Battista; Kattamis, Antonis; Goh, Ai-Sim; Cortoos, Annelore; Huang, Vicky; Weill, Marine; Merino Herranz, Raquel; Porter, John B
2017-05-01
Once-daily deferasirox dispersible tablets (DT) have a well-defined safety and efficacy profile and, compared with parenteral deferoxamine, provide greater patient adherence, satisfaction, and quality of life. However, barriers still exist to optimal adherence, including gastrointestinal tolerability and palatability, leading to development of a new film-coated tablet (FCT) formulation that can be swallowed with a light meal, without the need to disperse into a suspension prior to consumption. The randomized, open-label, phase II ECLIPSE study evaluated the safety of deferasirox DT and FCT formulations over 24 weeks in chelation-naïve or pre-treated patients aged ≥10 years, with transfusion-dependent thalassemia or IPSS-R very-low-, low-, or intermediate-risk myelodysplastic syndromes. One hundred seventy-three patients were randomized 1:1 to DT (n = 86) or FCT (n = 87). Adverse events (overall), consistent with the known deferasirox safety profile, were reported in similar proportions of patients for each formulation (DT 89.5%; FCT 89.7%), with a lower frequency of severe events observed in patients receiving FCT (19.5% vs. 25.6% DT). Laboratory parameters (serum creatinine, creatinine clearance, alanine aminotransferase, aspartate aminotransferase and urine protein/creatinine ratio) generally remained stable throughout the study. Patient-reported outcomes showed greater adherence and satisfaction, better palatability and fewer concerns with FCT than DT. Treatment compliance by pill count was higher with FCT (92.9%) than with DT (85.3%). This analysis suggests deferasirox FCT offers an improved formulation with enhanced patient satisfaction, which may improve adherence, thereby reducing frequency and severity of iron overload-related complications. © 2017 Wiley Periodicals, Inc.
Min, Jie; Li, Xiao-qiang; She, Bin; Chen, Yan; Mao, Bing
2015-05-19
Although the common cold is generally mild and self-limiting, it is a leading cause of consultations with doctors and missed days from school and work. In light of its favorable effects of relieving symptoms and minimal side-effects, Traditional Chinese Medicine (TCM) has been widely used to treat the common cold. However, there is a lack of robust evidence to support the clinical utility of such a treatment. This study is designed to evaluate the efficacy and safety of Gantong Granules compared with placebo in patients with the common cold with wind-heat syndrome (CCWHS). This is a multicenter, phase IIb, double-blind, placebo-controlled and randomized clinical trial. A total of 240 patients will be recruited, from 5 centers across China and randomly assigned to the high-dose group, medium-dose group, low-dose group or placebo control group in a 1:1:1:1 ratio. All subjects will receive the treatment for 3 to 5 days, followed by a 7-day follow-up period. The primary outcome is the duration of all symptoms. Secondary outcomes include the duration of primary symptoms and each symptom, time to fever relief and time to fever clearance, change in TCM symptom score, and change in Symptom and Sign Score. This trial will provide high-quality evidence on the efficacy and safety of Gantong Granules in treating CCWHS, and help to optimize the dose selection for a phase III clinical trial. The registration number is ChiCTR-TRC-14004255 , which was assigned by the Chinese Clinical Trial Registry on 12 February 2014.
Culine, Stéphane; Fléchon, Aude; Guillot, Aline; Le Moulec, Sylvestre; Pouessel, Damien; Rolland, Frédéric; Ravaud, Alain; Houédé, Nadine; Mignot, Laurent; Joly, Florence; Oudard, Stéphane; Gourgou, Sophie
2011-12-01
The optimal chemotherapy for patients with advanced transitional cell carcinoma of the urothelium who are not eligible for cisplatin remains to be defined. To assess the activity of gemcitabine alone (GEM) or in combination with oxaliplatin (GEMOX) in a randomized phase 2 trial. The primary end point was the objective response rate according to Response Evaluation Criteria in Solid Tumors criteria. The sample size was based on a two-stage Fleming design with p0=35% and p1=55%. At the end of the first stage designed to register 20 patients on each treatment arm, the observation of seven or more objective responses would have led to the inclusion of 30 more patients in each arm. From July 2004 to March 2009, 44 patients in 10 centers were randomly assigned into the GEM or the GEMOX arm, 22 on each treatment arm. The median age was 76 yr. Seven patients were included for a performance status (PS) of 2 only. The remaining 37 patients had an impaired renal function, 11 of whom also had a PS of 2. The median creatinine clearance was 45 ml/min (range: 30-80 ml/min). The trial was closed after the first part because the GEMOX arm did not reach the targeted objective response rate to proceed further. Oxaliplatin does not add any significant activity (in terms of response rates) compared with gemcitabine alone in patients with advanced transitional cell carcinoma of the urothelium who are ineligible for cisplatin. Copyright © 2011 European Association of Urology. Published by Elsevier B.V. All rights reserved.
Del Campo, J M; Roszak, A; Bidzinski, M; Ciuleanu, T E; Hogberg, T; Wojtukiewicz, M Z; Poveda, A; Boman, K; Westermann, A M; Lebedinsky, C
2009-11-01
This randomized, open-label, phase II clinical trial evaluated the optimal regimen of trabectedin administered every 3 weeks in patients with platinum-sensitive, relapsed, advanced ovarian cancer (AOC). Patients previously treated with less than two or two previous chemotherapy lines were randomized to receive trabectedin 1.5 mg/m(2) 24 h (arm A, n = 54) or 1.3 mg/m(2) 3 h (arm B, n = 53). Objective response rate (ORR) per RECIST was the primary efficacy end point. Toxic effects were graded according to the National Cancer Institute-Common Toxicity Criteria v. 2.0. ORR was 38.9% [95% confidence interval (CI) 25.9% to 53.1%; arm A] and 35.8% (95% CI 23.1% to 50.2%; arm B) (intention-to-treat primary analysis). Median time to progression was 6.2 months (95% CI 5.3-8.6 months; arm A) and 6.8 months (95% CI 4.6-7.4 months; arm B). Frequent severe adverse events were nausea/vomiting (24%, arm A; 15%, arm B) and fatigue (15%, arm A; 10%, arm B). Common severe laboratory abnormalities were transient, noncumulative neutropenia (55%, arm A; 37%, arm B) and transaminase increases (alanine aminotransferase, 55%, arm A; 59%, arm B). Both every-3-weeks trabectedin regimes, 1.5 mg/m(2) 24 h and 1.3 mg/m(2) 3 h, were active and reasonably well tolerated in AOC platinum-sensitive patients. Trabectedin every-3-weeks has promising activity and deserves to be further evaluated in relapsed AOC.
Solheim, Tora S; Laird, Barry J A; Balstad, Trude Rakel; Stene, Guro B; Bye, Asta; Johns, Neil; Pettersen, Caroline H; Fallon, Marie; Fayers, Peter; Fearon, Kenneth; Kaasa, Stein
2017-10-01
Cancer cachexia is a syndrome of weight loss (including muscle and fat), anorexia, and decreased physical function. It has been suggested that the optimal treatment for cachexia should be a multimodal intervention. The primary aim of this study was to examine the feasibility and safety of a multimodal intervention (n-3 polyunsaturated fatty acid nutritional supplements, exercise, and anti-inflammatory medication: celecoxib) for cancer cachexia in patients with incurable lung or pancreatic cancer undergoing chemotherapy. Patients receiving two cycles of standard chemotherapy were randomized to either the multimodal cachexia intervention or standard care. Primary outcome measures were feasibility, assessed by recruitment, attrition, and compliance with the intervention (>50% of components in >50% of patients). Key secondary outcomes were change in weight, muscle mass, physical activity, safety, and survival. Three hundred and ninety-nine patients were screened, resulting in 46 recruited (11.5%). Twenty-five patients were randomized to the intervention and 21 to control. Forty-one completed the study (attrition rate 11%). Compliance with the individual components of the intervention was 76% for celecoxib, 60% for exercise, and 48% for nutritional supplements. As expected from the sample size, there was no statistically significant effect on physical activity or muscle mass. There were no intervention-related serious adverse events, and survival was similar between the groups. A multimodal cachexia intervention is feasible and safe in patients with incurable lung or pancreatic cancer; however, compliance with the nutritional supplements was suboptimal. A phase III study is now underway to assess fully the effect of the intervention. © 2017 The Authors. Journal of Cachexia, Sarcopenia and Muscle published by John Wiley & Sons Ltd on behalf of the Society on Sarcopenia, Cachexia and Wasting Disorders.
A model of optimal voluntary muscular control.
FitzHugh, R
1977-07-19
In the absence of detailed knowledge of how the CNS controls a muscle through its motor fibers, a reasonable hypothesis is that of optimal control. This hypothesis is studied using a simplified mathematical model of a single muscle, based on A.V. Hill's equations, with the series elastic element omitted and with the motor signal represented by a single input variable. Two cost functions were used. The first was the total energy expended by the muscle (work plus heat). If the load is a constant force, with no inertia, Hill's optimal velocity of shortening results. If the load includes a mass, analysis by optimal control theory shows that the motor signal to the muscle consists of three phases: (1) maximal stimulation, to accelerate the mass to the optimal velocity as quickly as possible; (2) an intermediate level of stimulation, to hold the velocity at its optimal value once reached; and (3) zero stimulation, to permit the mass to slow down, as quickly as possible, to zero velocity at the specified distance shortened. If the latter distance is too small, or the mass too large, the optimal velocity is not reached, and phase (2) is absent. For lengthening, there is no optimal velocity; there are only two phases, zero stimulation followed by maximal stimulation. The second cost function was total time. The optimal control for shortening consists of only phases (1) and (3) above, and is identical to the minimal-energy control whenever phase (2) is absent from the latter. Generalizations of this model to include viscous loads and a series elastic element are discussed.
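The existence of an interior optimal shortening velocity follows from Hill's force-velocity relation: force falls as velocity rises, so delivered mechanical power peaks between v = 0 and v = vmax. A toy numerical check is below, in dimensionless units with a = b = 0.25 assumed; note this maximizes power, an analogy to (not a reproduction of) the paper's energy-based cost.

```python
import numpy as np

# Dimensionless Hill force-velocity curve: (P + a)(v + b) = (P0 + a) b,
# with isometric force P0 = 1, maximal velocity vmax = 1, and a = b = 0.25.
a = b = 0.25

def force(v):
    """Force sustained at shortening velocity v (1 at v=0, 0 at v=1)."""
    return (1 + a) * b / (v + b) - a

v = np.linspace(0.0, 1.0, 1001)
power = force(v) * v                     # mechanical power output
v_opt = v[np.argmax(power)]              # interior optimum, near 0.31 here
```

Power vanishes at both endpoints (zero velocity and zero force), so the maximum is necessarily at an intermediate velocity, which is the qualitative content of "Hill's optimal velocity."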
Phase transitions in Pareto optimal complex networks
NASA Astrophysics Data System (ADS)
Seoane, Luís F.; Solé, Ricard
2015-09-01
The organization of interactions in complex systems can be described by networks connecting different units. These graphs are useful representations of the local and global complexity of the underlying systems. The origin of their topological structure can be diverse, resulting from different mechanisms including multiplicative processes and optimization. In spatial networks or in graphs where cost constraints are at work, as it occurs in a plethora of situations from power grids to the wiring of neurons in the brain, optimization plays an important part in shaping their organization. In this paper we study network designs resulting from a Pareto optimization process, where different simultaneous constraints are the targets of selection. We analyze three variations on a problem, finding phase transitions of different kinds. Distinct phases are associated with different arrangements of the connections, but the need of drastic topological changes does not determine the presence or the nature of the phase transitions encountered. Instead, the functions under optimization do play a determinant role. This reinforces the view that phase transitions do not arise from intrinsic properties of a system alone, but from the interplay of that system with its external constraints.
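Extracting the Pareto-optimal set from a population of candidate designs is the basic primitive behind such optimization studies; a minimal sketch for two competing costs follows. The random "designs" here are placeholders, not the paper's network model.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy designs scored on two competing costs (e.g., wiring cost vs. delay)
costs = rng.random((300, 2))

def pareto_front(c):
    """Indices of non-dominated points under minimization of every
    objective: a point is kept unless some other point is <= in all
    costs and strictly < in at least one."""
    keep = []
    for i, p in enumerate(c):
        dominated = np.any(np.all(c <= p, axis=1) & np.any(c < p, axis=1))
        if not dominated:
            keep.append(i)
    return np.array(keep)

front = pareto_front(costs)
```

The phase transitions discussed in the abstract show up as qualitative changes in the shape of this front (and of the designs along it) as the constraints under selection are varied.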
Persistence and Lifelong Fidelity of Phase Singularities in Optical Random Waves.
De Angelis, L; Alpeggiani, F; Di Falco, A; Kuipers, L
2017-11-17
Phase singularities are locations where light is twisted like a corkscrew, with positive or negative topological charge depending on the twisting direction. Among the multitude of singularities arising in random wave fields, some can be found at the same location, but only when they exhibit opposite topological charge, which results in their mutual annihilation. New pairs can be created as well. With near-field experiments supported by theory and numerical simulations, we study the persistence and pairing statistics of phase singularities in random optical fields as a function of the excitation wavelength. We demonstrate how such entities can encrypt fundamental properties of the random fields in which they arise.
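The counting of singularities and their topological charges can be sketched numerically with an isotropic random-wave model (my own illustration, not the near-field experiment): superpose plane waves with random directions and phases, then read the charge off the winding of the phase around each grid plaquette.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random wave field: superposition of plane waves with random travel
# directions and phases (an isotropic random-wave model).
N, k0, n_waves = 128, 8.0, 64
x = np.linspace(0.0, 2 * np.pi, N)
X, Y = np.meshgrid(x, x)
angles = rng.uniform(0, 2 * np.pi, n_waves)
phis = rng.uniform(0, 2 * np.pi, n_waves)
field = sum(np.exp(1j * (k0 * np.cos(a) * X + k0 * np.sin(a) * Y + p))
            for a, p in zip(angles, phis))

# Topological charge: winding number of the phase around each plaquette.
phi = np.angle(field)
wrap = lambda d: (d + np.pi) % (2 * np.pi) - np.pi   # wrap into (-pi, pi]
w = (wrap(phi[:-1, 1:] - phi[:-1, :-1])
     + wrap(phi[1:, 1:] - phi[:-1, 1:])
     + wrap(phi[1:, :-1] - phi[1:, 1:])
     + wrap(phi[:-1, :-1] - phi[1:, :-1]))
charge = np.rint(w / (2 * np.pi)).astype(int)

n_pos = int((charge == 1).sum())    # positive singularities
n_neg = int((charge == -1).sum())   # negative singularities
```

Pairs of opposite charge found at nearly the same location in two fields are the candidates for the creation/annihilation events the paper tracks.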
Phase retrieval via incremental truncated amplitude flow algorithm
NASA Astrophysics Data System (ADS)
Zhang, Quanbing; Wang, Zhifa; Wang, Linjie; Cheng, Shichao
2017-10-01
This paper considers the phase retrieval problem of recovering the unknown signal from the given quadratic measurements. A phase retrieval algorithm based on Incremental Truncated Amplitude Flow (ITAF) which combines the ITWF algorithm and the TAF algorithm is proposed. The proposed ITAF algorithm enhances the initialization by performing both of the truncation methods used in ITWF and TAF respectively, and improves the performance in the gradient stage by applying the incremental method proposed in ITWF to the loop stage of TAF. Moreover, the original sampling vector and measurements are preprocessed before initialization according to the variance of the sensing matrix. Simulation experiments verified the feasibility and validity of the proposed ITAF algorithm. The experimental results show that it achieves a higher success rate and faster convergence than other algorithms. In particular, for noiseless random Gaussian signals, ITAF can recover any real-valued signal accurately from magnitude measurements whose number is about 2.5 times the signal length, which is close to the theoretical limit (about 2 times the signal length). And it usually converges to the optimal solution within 20 iterations, far fewer than state-of-the-art algorithms require.
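The core amplitude-flow update that ITWF and TAF share can be sketched as plain gradient descent on the amplitude residual (no truncation or incremental sampling here, and a warm start standing in for the spectral initialization; all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 50, 250                              # signal length, number of measurements
A = rng.standard_normal((m, n))             # random Gaussian sensing matrix
x_true = rng.standard_normal(n)
y = np.abs(A @ x_true)                      # magnitude-only measurements

def loss(x):
    return 0.5 * np.sum((np.abs(A @ x) - y) ** 2)

# Plain amplitude-flow gradient: A^T ((|Ax| - y) * sign(Ax)).
x = x_true + 0.3 * rng.standard_normal(n)   # warm start (replaces spectral init)
mu = 0.2 / m                                # small fixed step size
losses = [loss(x)]
for _ in range(200):
    z = A @ x
    x -= mu * (A.T @ ((np.abs(z) - y) * np.sign(z)))
    losses.append(loss(x))

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

ITAF's truncation discards measurements whose current residual is untrustworthy, and its incremental updates use one (or a few) measurements per step; both refinements modify this same gradient.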
Chen, Yeung-Jen; Chiang, Chao-Ching; Huang, Peng-Ju; Huang, Jason; Karcher, Keith; Li, Honglan
2015-11-01
To evaluate the efficacy and safety of tapentadol immediate-release (IR) for treating acute pain following orthopedic bunionectomy surgery in a Taiwanese population. This was a phase 3, randomized, double-blind, placebo-controlled, parallel-group bridging study in which Taiwanese patients (N = 60) with moderate-to-severe pain following bunionectomy were randomized (1:1:1) to receive tapentadol IR 50 or 75 mg or placebo orally every 4-6 hours over a 72 hour period. The primary endpoint was the sum of pain intensity difference over 48 hours (SPID48), analyzed using analysis of variance. Of the 60 patients randomized (mainly women [96.7%]; median age 44 years), 41 (68.3%) completed the treatment. Mean SPID48 values were significantly higher for tapentadol IR (p ≤ 0.006: 50 mg, p ≤ 0.004: 75 mg) compared with placebo. Between-group differences in LS means of SPID48 (vs. placebo) were tapentadol IR 50 mg: 105.6 (95% CI: 32.0; 179.2); tapentadol IR 75 mg: 126.6 (95% CI: 49.5; 203.7). Secondary endpoints including SPID at 12, 24, and 72 hours, time to first use of rescue medication, cumulative distribution of responder rates, total pain relief, sum of total pain relief and sum of pain intensity difference at 12, 24, 48, and 72 hours, and patient global impression of change showed numerically better results, supporting that tapentadol IR (50 and 75 mg) was more efficacious than placebo in relieving acute pain. The most frequent treatment emergent adverse events reported in ≥ 10% of patients in either group were dizziness, nausea, and vomiting. A possible limitation of this study is the closely controlled patient monitoring at 4-6 hour dosing intervals, which reflects optimal conditions and thus may not approximate real-world clinical practice. However, all treatment groups would be equally affected by any such bias of frequent monitoring, since this was a randomized and double-blind study.
Tapentadol IR treatment significantly relieved acute postoperative pain and was well tolerated in a Taiwanese population. ClinicalTrials.gov identifier: NCT01813890.
Rand, Miya K; Shimansky, Yury P
2013-03-01
A quantitative model of optimal transport-aperture coordination (TAC) during reach-to-grasp movements has been developed in our previous studies. The utilization of that model for data analysis made it possible, for the first time, to examine the phase dependence of the precision demand specified by the CNS for neurocomputational information processing during an ongoing movement. It was shown that the CNS utilizes a two-phase strategy for movement control. That strategy consists of reducing the precision demand for neural computations during the initial phase, which decreases the cost of information processing at the expense of lower extent of control optimality. To successfully grasp the target object, the CNS increases precision demand during the final phase, resulting in higher extent of control optimality. In the present study, we generalized the model of optimal TAC to a model of optimal coordination between X and Y components of point-to-point planar movements (XYC). We investigated whether the CNS uses the two-phase control strategy for controlling those movements, and how the strategy parameters depend on the prescribed movement speed, movement amplitude and the size of the target area. The results indeed revealed a substantial similarity between the CNS's regulation of TAC and XYC. First, the variability of XYC within individual trials was minimal, meaning that execution noise during the movement was insignificant. Second, the inter-trial variability of XYC was considerable during the majority of the movement time, meaning that the precision demand for information processing was lowered, which is characteristic of the initial phase. That variability significantly decreased, indicating higher extent of control optimality, during the shorter final movement phase. The final phase was the longest (shortest) under the most (least) challenging combination of speed and accuracy requirements, fully consistent with the concept of the two-phase control strategy.
This paper further discusses the relationship between motor variability and XYC variability.
Wilens, Timothy E; Robertson, Brigitte; Sikirica, Vanja; Harper, Linda; Young, Joel L; Bloomfield, Ralph; Lyne, Andrew; Rynkowski, Gail; Cutler, Andrew J
2015-11-01
Despite the continuity of attention-deficit/hyperactivity disorder (ADHD) into adolescence, little is known regarding use of nonstimulants to treat ADHD in adolescents. This phase 3 trial evaluated the safety and efficacy of guanfacine extended release (GXR) in adolescents with ADHD. This 13-week, multicenter, randomized, double-blind, placebo-controlled trial evaluated once-daily GXR (1-7 mg per day) in adolescents with ADHD aged 13 to 17 years. The primary endpoint was the change from baseline in the ADHD Rating Scale-IV (ADHD-RS-IV) total score; key secondary endpoints included scores from the Clinical Global Impressions-Severity of Illness (CGI-S), and Learning and School domain and Family domain scores from the Weiss Functional Impairment Rating Scale-Parent Report (WFIRS-P) at week 13. A total of 314 participants were randomized (GXR, n = 157; placebo, n = 157). The majority of participants received optimal doses of 3, 4, 5, or 6 mg (30 [22.9%], 26 [19.8%], 27 [20.6%], or 24 [18.3%] participants, respectively), with 46.5% of participants receiving an optimal dose above the currently approved maximum dose limit of 4 mg. Participants receiving GXR showed improvement in ADHD-RS-IV total score compared with placebo (least-squares mean score change, -24.55 [GXR] versus -18.53 [placebo]; effect size, 0.52; p <.001). More participants on GXR also showed significant improvement in CGI-S scores compared with placebo (50.6% versus 36.1%; p = .010). There was no statistically significant difference between treatments at week 13 in the 2 WFIRS-P domains. Most treatment-emergent adverse events were mild to moderate, with sedation-related events reported most commonly. GXR was associated with statistically significant improvements in ADHD symptoms in adolescents. GXR was well tolerated, with no new safety signals reported. 
Dose-Optimization in Adolescents Aged 13-17 Diagnosed With Attention-Deficit/Hyperactivity Disorder (ADHD) Using Extended-Release Guanfacine HCl; http://ClinicalTrials.gov/; NCT01081132. Copyright © 2015. Published by Elsevier Inc.
Global mean-field phase diagram of the spin-1 Ising ferromagnet in a random crystal field
NASA Astrophysics Data System (ADS)
Borelli, M. E. S.; Carneiro, C. E. I.
1996-02-01
We study the phase diagram of the mean-field spin-1 Ising ferromagnet in a uniform magnetic field H and a random crystal field Δi, with probability distribution P(Δi) = pδ(Δi − Δ) + (1 − p)δ(Δi). We analyse the effects of randomness on the first-order surfaces of the Δ-T-H phase diagram for different values of the concentration p and show how these surfaces are affected by the dilution of the crystal field.
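The zero-field limit of this mean-field model can be sketched as a fixed-point iteration over the standard spin-1 single-site magnetization, averaged over the bimodal crystal-field distribution (units with zJ = 1; my own quick illustration, not the authors' calculation):

```python
import math

def magnetization(T, delta, p, zJ=1.0, iters=400):
    """Fixed-point iteration for the zero-field mean-field magnetization of
    the spin-1 Ising (Blume-Capel-type) ferromagnet with a bimodal random
    crystal field: Delta_i = delta with probability p, 0 with probability 1-p."""
    beta = 1.0 / T
    def m_site(m, d):   # standard spin-1 mean-field single-site magnetization
        e = beta * zJ * m
        return 2.0 * math.sinh(e) / (2.0 * math.cosh(e) + math.exp(beta * d))
    m = 0.9             # start from an ordered guess
    for _ in range(iters):
        m = p * m_site(m, delta) + (1.0 - p) * m_site(m, 0.0)
    return m

m_low  = magnetization(T=0.1, delta=0.5, p=0.5)   # deep in the ordered phase
m_high = magnetization(T=5.0, delta=0.5, p=0.5)   # disordered phase
```

Scanning T, delta, and p with this iteration traces the second-order parts of the phase boundary; the first-order surfaces the abstract discusses require comparing free energies of the competing fixed points rather than iterating from one guess.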
Subtraction method in the Second Random Phase Approximation
NASA Astrophysics Data System (ADS)
Gambacurta, Danilo
2018-02-01
We discuss the subtraction method applied to the Second Random Phase Approximation (SRPA). This method has been proposed to overcome double counting and stability issues appearing in beyond mean-field calculations. We show that the subtraction procedure leads to a considerable reduction of the SRPA downwards shift with respect to the random phase approximation (RPA) spectra and to results that are weakly cutoff dependent. Applications to the isoscalar monopole and quadrupole response in 16O and to the low-lying dipole response in 48Ca are shown and discussed.
2015-09-01
Award Number: W81XWH-09-1-0596. TITLE: A Randomized Phase 2 Trial of 177Lu Radiolabeled Anti-PSMA Monoclonal Antibody J591 in Patients With High-Risk Castrat Biochemically Relapsed... in December 2014 with approval to proceed without modifications. SUBJECT TERMS: Prostate cancer, PSA, PSMA, monoclonal antibody
Lichtman, Aron H; Lux, Eberhard Albert; McQuade, Robert; Rossetti, Sandro; Sanchez, Raymond; Sun, Wei; Wright, Stephen; Kornyeyeva, Elena; Fallon, Marie T
2018-02-01
Prior Phase 2/3 studies found that cannabinoids might provide adjunctive analgesia in advanced cancer patients with uncontrolled pain. To assess adjunctive nabiximols (Sativex ® ), an extract of Cannabis sativa containing two potentially therapeutic cannabinoids (Δ9-tetrahydrocannabinol [27 mg/mL] and cannabidiol [25 mg/mL]), in advanced cancer patients with chronic pain unalleviated by optimized opioid therapy. Phase 3, double-blind, randomized, placebo-controlled trial in patients with advanced cancer and average pain Numerical Rating Scale scores ≥4 and ≤8 despite optimized opioid therapy. Patients randomized to nabiximols (n = 199) or placebo (n = 198) self-titrated study medications over a two-week period, followed by a three-week treatment period at the titrated dose. Median percent improvements in average pain Numerical Rating Scale score from baseline to end of treatment in the nabiximols and placebo groups were 10.7% vs. 4.5% (P = 0.0854) in the intention-to-treat population (primary variable) and 15.5% vs. 6.3% (P = 0.0378) in the per-protocol population. Nabiximols was statistically superior to placebo on two of three quality-of-life instruments at Week 3 and on all three at Week 5. In exploratory post hoc analyses, U.S. patients, but not patients from the rest of the world, experienced significant benefits from nabiximols on multiple secondary endpoints. Possible contributing factors to differences in nabiximols efficacy include: 1) the U.S. participants received lower doses of opioids at baseline than the rest of the world and 2) the subgroups had different distribution of cancer pain types, which may have been related to differences in pathophysiology of pain. The safety profile of nabiximols was consistent with earlier studies. Although not superior to placebo on the primary efficacy endpoint, nabiximols had benefits on multiple secondary endpoints, particularly in the U.S. 
Nabiximols might have utility in patients with advanced cancer who receive a lower opioid dose, such as individuals with early intolerance to opioid therapy. Copyright © 2017 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Shi, X.; Zhang, G.
2013-12-01
Because of the extensive computational burden, parametric uncertainty analyses are rarely conducted for geological carbon sequestration (GCS) process-based multi-phase models. The difficulty of predictive uncertainty analysis for the CO2 plume migration in realistic GCS models is due not only to the spatial distribution of the caprock and reservoir (i.e., heterogeneous model parameters), but also to the multiple local minima of the GCS optimization problem that arise from the complex nonlinear multi-phase (gas and aqueous), multi-component (water, CO2, salt) transport equations. The geological model built by Doughty and Pruess (2004) for the Frio pilot site (Texas) was selected and assumed to represent the 'true' system, which was composed of seven different facies (geological units) distributed among 10 layers. We chose to calibrate the permeabilities of these facies. Pressure and gas saturation values from this true model were then extracted and used as observations for subsequent model calibration. Random noise was added to the observations to approximate realistic field conditions. Each simulation of the model lasts about 2 hours. In this study, we develop a new approach that improves the computational efficiency of Bayesian inference by constructing a surrogate system based on an adaptive sparse-grid stochastic collocation method. This surrogate response-surface global optimization algorithm is first used to calibrate the model parameters; prediction uncertainty of the CO2 plume position, propagated from the parametric uncertainty, is then quantified in the numerical experiments and compared to the actual plume from the 'true' model. Results show that the approach is computationally efficient for multi-modal optimization and prediction uncertainty quantification with computationally expensive simulation models. Both our inverse methodology and our findings are broadly applicable to GCS in heterogeneous storage formations.
Two-phase strategy of controlling motor coordination determined by task performance optimality.
Shimansky, Yury P; Rand, Miya K
2013-02-01
A quantitative model of optimal coordination between hand transport and grip aperture has been derived in our previous studies of reach-to-grasp movements without utilizing explicit knowledge of the optimality criterion or motor plant dynamics. The model's utility for experimental data analysis has been demonstrated. Here we show how to generalize this model for a broad class of reaching-type, goal-directed movements. The model allows for measuring the variability of motor coordination and studying its dependence on movement phase. The experimentally found characteristics of that dependence imply that execution noise is low and does not affect motor coordination significantly. From those characteristics it is inferred that the cost of neural computations required for information acquisition and processing is included in the criterion of task performance optimality as a function of precision demand for state estimation and decision making. The precision demand is an additional optimized control variable that regulates the amount of neurocomputational resources activated dynamically. It is shown that an optimal control strategy in this case comprises two different phases. During the initial phase, the cost of neural computations is significantly reduced at the expense of reducing the demand for their precision, which results in speed-accuracy tradeoff violation and significant inter-trial variability of motor coordination. During the final phase, neural computations and thus motor coordination are considerably more precise to reduce the cost of errors in making a contact with the target object. The generality of the optimal coordination model and the two-phase control strategy is illustrated on several diverse examples.
FIBER OPTICS. ACOUSTOOPTICS: Compression of random pulses in fiber waveguides
NASA Astrophysics Data System (ADS)
Aleshkevich, Viktor A.; Kozhoridze, G. D.
1990-07-01
An investigation is made of the compression of randomly modulated signal + noise pulses during their propagation in a fiber waveguide. An allowance is made for a cubic nonlinearity and quadratic dispersion. The relationships governing the kinetics of transformation of the time envelope, and those determining the duration and intensity of a random pulse, are derived. The expressions for the optimal length of a fiber waveguide and for the maximum degree of compression are compared with the available data for regular pulses, and recommendations on the selection of optimal parameters are given.
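The governing model here, cubic (Kerr) nonlinearity plus quadratic dispersion, is the nonlinear Schrödinger equation; a standard split-step Fourier sketch in illustrative soliton units (not the paper's random-pulse statistics) looks like this:

```python
import numpy as np

# Split-step Fourier integration of the NLSE
#   dA/dz = -i*(beta2/2)*d^2A/dT^2 + i*gamma*|A|^2*A
N, T_max = 1024, 20.0
t = np.linspace(-T_max, T_max, N, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(N, d=t[1] - t[0])
beta2, gamma, dz, steps = -1.0, 1.0, 1e-3, 2000   # anomalous dispersion, z = 2

A = np.exp(0.05j * t**2) / np.cosh(t)   # chirped input pulse
E0 = np.sum(np.abs(A)**2)               # pulse energy (a conserved quantity)

H = np.exp(0.5j * beta2 * w**2 * dz)    # linear (dispersion) step, Fourier domain
for _ in range(steps):
    A = np.fft.ifft(np.fft.fft(A) * H)
    A *= np.exp(1j * gamma * np.abs(A)**2 * dz)   # nonlinear (Kerr) step

E1 = np.sum(np.abs(A)**2)
width = np.sqrt(np.sum(t**2 * np.abs(A)**2) / E1)   # RMS duration after z = 2
```

Replacing the deterministic chirp with random phase noise and averaging `width` over realizations would reproduce, numerically, the kind of compression statistics the paper treats analytically. Both sub-steps are unitary, so pulse energy is conserved to round-off.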
Visualizing and improving the robustness of phase retrieval algorithms
Tripathi, Ashish; Leyffer, Sven; Munson, Todd; ...
2015-06-01
Coherent x-ray diffractive imaging is a novel imaging technique that utilizes phase retrieval and nonlinear optimization methods to image matter at nanometer scales. We explore how the convergence properties of a popular phase retrieval algorithm, Fienup's HIO, behave by introducing a reduced dimensionality problem allowing us to visualize and quantify convergence to local minima and the globally optimal solution. We then introduce generalizations of HIO that improve upon the original algorithm's ability to converge to the globally optimal solution.
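Fienup's HIO as described can be sketched compactly: the Fourier magnitudes of a synthetic object are the data, and HIO alternates a Fourier-magnitude projection with support/nonnegativity feedback in real space (object and parameters are synthetic and illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic nonnegative object with known support, and its Fourier magnitudes.
N = 32
obj = np.zeros((N, N))
obj[8:24, 8:24] = rng.random((16, 16))
support = obj > 0
mag = np.abs(np.fft.fft2(obj))

def fourier_error(x):
    return np.linalg.norm(np.abs(np.fft.fft2(x)) - mag) / np.linalg.norm(mag)

beta = 0.9
x = rng.random((N, N))          # random initial guess
e0 = fourier_error(x)
for _ in range(500):
    F = np.fft.fft2(x)
    # Fourier-domain projection: keep measured magnitudes, current phases.
    xp = np.real(np.fft.ifft2(mag * np.exp(1j * np.angle(F))))
    ok = support & (xp >= 0)    # pixels already satisfying object constraints
    x = np.where(ok, xp, x - beta * xp)   # hybrid input-output feedback
e = fourier_error(x)
```

The reduced-dimensionality analysis in the paper amounts to tracking trajectories of exactly this iteration on a problem small enough that its local minima can be plotted.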
Locally adaptive methods for KDE-based random walk models of reactive transport in porous media
NASA Astrophysics Data System (ADS)
Sole-Mari, G.; Fernandez-Garcia, D.
2017-12-01
Random Walk Particle Tracking (RWPT) coupled with Kernel Density Estimation (KDE) has recently been proposed to simulate reactive transport in porous media. KDE provides an optimal estimate of the area of influence of particles, which is a key element in simulating nonlinear chemical reactions. However, several important drawbacks can be identified: (1) the optimal KDE method is computationally intensive and therefore cannot be used at each time step of the simulation; (2) it does not take advantage of prior information about the physical system and the previous history of the solute plume; (3) even if the kernel is optimal, the relative error in RWPT simulations typically increases over time as the particle density diminishes by dilution. To overcome these problems, we propose an adaptive branching random walk methodology that incorporates the physics and the particle history while maintaining accuracy over time. The method allows particles to efficiently split and merge when necessary, as well as to optimally adapt their local kernel shape without having to recalculate the kernel size. We illustrate the advantage of the method by simulating complex reactive transport problems in randomly heterogeneous porous media.
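The KDE building block can be sketched in one dimension with Silverman's rule-of-thumb bandwidth, a common fixed "optimal" kernel-size choice that the adaptive method above improves on (illustrative setup, not the paper's transport simulation):

```python
import numpy as np

rng = np.random.default_rng(3)

# Particle positions after diffusion, drawn from a standard normal plume.
n = 5000
particles = rng.standard_normal(n)

# Silverman's rule-of-thumb bandwidth for a Gaussian kernel in 1D.
h = 1.06 * particles.std() * n ** (-1 / 5)

def kde(x, pts, h):
    """Gaussian kernel density estimate evaluated at the points x."""
    u = (x[:, None] - pts[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(pts) * h * np.sqrt(2 * np.pi))

x = np.linspace(-3, 3, 61)
estimate = kde(x, particles, h)
exact = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)   # true plume density
max_err = np.max(np.abs(estimate - exact))
```

Drawback (3) in the abstract shows up here directly: shrinking `n` (dilution) inflates both `h` and `max_err`, which is what particle splitting is designed to counteract.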
Application of stochastic processes in random growth and evolutionary dynamics
NASA Astrophysics Data System (ADS)
Oikonomou, Panagiotis
We study the effect of power-law distributed randomness on the dynamical behavior of processes such as stochastic growth patterns and evolution. First, we examine the geometrical properties of random shapes produced by a generalized stochastic Loewner Evolution driven by a superposition of a Brownian motion and a stable Levy process. The situation is defined by the usual stochastic Loewner Evolution parameter, kappa, as well as alpha, which defines the power-law tail of the stable Levy distribution. We show that the properties of these patterns change qualitatively and singularly at critical values of kappa and alpha. It is reasonable to call such changes "phase transitions". These transitions occur as kappa passes through four and as alpha passes through one. Numerical simulations are used to explore the global scaling behavior of these patterns in each "phase". We show both analytically and numerically that the growth continues indefinitely in the vertical direction for alpha greater than 1, grows logarithmically with time for alpha equal to 1, and saturates for alpha smaller than 1. The probability density has two different scales corresponding to directions along and perpendicular to the boundary. Scaling functions for the probability density are given for various limiting cases. Second, we study the effect of the architecture of biological networks on their evolutionary dynamics. In recent years, studies of the architecture of large networks have unveiled a common topology, called scale-free, in which a majority of the elements are poorly connected except for a small fraction of highly connected components. We ask how networks with distinct topologies can evolve towards a pre-established target phenotype through a process of random mutations and selection. We use networks of Boolean components as a framework to model a large class of phenotypes.
Within this approach, we find that homogeneous random networks and scale-free networks exhibit drastically different evolutionary paths. While homogeneous random networks accumulate neutral mutations and evolve by sparse punctuated steps, scale-free networks evolve rapidly and continuously towards the target phenotype. Moreover, we show that scale-free networks always evolve faster than homogeneous random networks; remarkably, this property does not depend on the precise value of the topological parameter. By contrast, homogeneous random networks require a specific tuning of their topological parameter in order to optimize their fitness. This model suggests that the evolutionary paths of biological networks, punctuated or continuous, may solely be determined by the network topology.
Validation of optical codes based on 3D nanostructures
NASA Astrophysics Data System (ADS)
Carnicer, Artur; Javidi, Bahram
2017-05-01
Image information encoding using random phase masks produces speckle-like noise distributions when the sample is propagated in the Fresnel domain. As a result, information cannot be accessed by simple visual inspection. Phase masks can be easily implemented in practice by attaching cello-tape to the plain-text message. Conventional 2D phase masks can be generalized to 3D by combining glass and diffusers, resulting in a more complex physical unclonable function. In this communication, we model the behavior of a 3D phase mask using a simple approach: light is propagated through glass using the angular spectrum of plane waves, whereas the diffuser is described as a random phase mask and a blurring effect on the amplitude of the propagated wave. Using different designs for the 3D phase mask and multiple samples, we demonstrate that classification is possible using the k-nearest neighbors and random forests machine learning algorithms.
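The forward model described, a random phase mask followed by angular-spectrum propagation, can be sketched as follows (illustrative wavelength and pixel values; the diffuser's amplitude-blurring term is omitted):

```python
import numpy as np

rng = np.random.default_rng(4)

# Unit-amplitude plane wave through a random phase mask, propagated a
# distance z with the angular spectrum of plane waves.
N, dx, wl, z = 256, 1e-6, 0.5e-6, 200e-6   # grid, pixel, wavelength, distance
U0 = np.exp(1j * 2 * np.pi * rng.random((N, N)))   # random phase mask

fx = np.fft.fftfreq(N, d=dx)
FX, FY = np.meshgrid(fx, fx)
arg = (1 / wl) ** 2 - FX**2 - FY**2
kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
H = np.where(arg > 0, np.exp(1j * kz * z), 0)      # evanescent waves dropped

U = np.fft.ifft2(np.fft.fft2(U0) * H)
I = np.abs(U) ** 2
contrast = I.std() / I.mean()   # fully developed speckle gives contrast near 1
```

The input intensity is perfectly flat (contrast 0); after propagation the speckle-like intensity is exactly the "cannot be read by visual inspection" property the abstract relies on, and intensity patterns like `I` are what the k-NN and random forest classifiers are trained on.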
Optimization Model for Web Based Multimodal Interactive Simulations.
Halic, Tansel; Ahn, Woojin; De, Suvranu
2015-07-15
This paper presents a technique for optimizing the performance of web based multimodal interactive simulations. For such applications where visual quality and the performance of simulations directly influence user experience, overloading of hardware resources may result in unsatisfactory reduction in the quality of the simulation and user satisfaction. However, optimization of simulation performance on individual hardware platforms is not practical. Hence, we present a mixed integer programming model to optimize the performance of graphical rendering and simulation performance while satisfying application specific constraints. Our approach includes three distinct phases: identification, optimization and update. In the identification phase, the computing and rendering capabilities of the client device are evaluated using an exploratory proxy code. This data is utilized in conjunction with user specified design requirements in the optimization phase to ensure best possible computational resource allocation. The optimum solution is used for rendering (e.g. texture size, canvas resolution) and simulation parameters (e.g. simulation domain) in the update phase. Test results are presented on multiple hardware platforms with diverse computing and graphics capabilities to demonstrate the effectiveness of our approach.
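The optimization phase can be illustrated with a toy discrete version of the idea (the level names, quality scores, and additive cost model below are hypothetical; the paper formulates a mixed integer program, not this brute-force search over discrete levels):

```python
from itertools import product

# Hypothetical settings measured in the identification phase:
# (quality score, estimated per-frame cost in ms) for each level.
texture = [(1, 2.0), (2, 4.5), (3, 9.0)]    # texture size levels
canvas  = [(1, 1.5), (2, 3.0), (3, 6.5)]    # canvas resolution levels
sim_dom = [(1, 3.0), (2, 6.0), (3, 12.0)]   # simulation domain levels
budget_ms = 16.0                            # roughly a 60 fps frame budget

best = None
for (qt, ct), (qc, cc), (qs, cs) in product(texture, canvas, sim_dom):
    cost = ct + cc + cs
    if cost <= budget_ms:                   # hardware constraint
        q = qt + qc + qs                    # objective: total quality
        if best is None or q > best[0]:
            best = (q, (qt, qc, qs), cost)

quality, choice, cost = best
```

A real MIP solver scales this to many more variables and constraints, but the structure is the same: maximize quality subject to the client's measured resource budget, then hand `choice` to the update phase.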
Recent advances in integrated multidisciplinary optimization of rotorcraft
NASA Technical Reports Server (NTRS)
Adelman, Howard M.; Walsh, Joanne L.; Pritchard, Jocelyn I.
1992-01-01
A joint activity involving NASA and Army researchers at NASA LaRC to develop optimization procedures to improve the rotor blade design process by integrating appropriate disciplines and accounting for all of the important interactions among the disciplines is described. The disciplines involved include rotor aerodynamics, rotor dynamics, rotor structures, airframe dynamics, and acoustics. The work is focused on combining these five key disciplines in an optimization procedure capable of designing a rotor system to satisfy multidisciplinary design requirements. Fundamental to the plan is a three-phased approach. In phase 1, the disciplines of blade dynamics, blade aerodynamics, and blade structure are closely coupled while acoustics and airframe dynamics are decoupled and are accounted for as effective constraints on the design for the first three disciplines. In phase 2, acoustics is integrated with the first three disciplines. Finally, in phase 3, airframe dynamics is integrated with the other four disciplines. Representative results from work performed to date are described. These include optimal placement of tuning masses for reduction of blade vibratory shear forces, integrated aerodynamic/dynamic optimization, and integrated aerodynamic/dynamic/structural optimization. Examples of validating procedures are described.
Optimization Model for Web Based Multimodal Interactive Simulations
Halic, Tansel; Ahn, Woojin; De, Suvranu
2015-01-01
This paper presents a technique for optimizing the performance of web based multimodal interactive simulations. For such applications where visual quality and the performance of simulations directly influence user experience, overloading of hardware resources may result in unsatisfactory reduction in the quality of the simulation and user satisfaction. However, optimization of simulation performance on individual hardware platforms is not practical. Hence, we present a mixed integer programming model to optimize the performance of graphical rendering and simulation performance while satisfying application specific constraints. Our approach includes three distinct phases: identification, optimization and update. In the identification phase, the computing and rendering capabilities of the client device are evaluated using an exploratory proxy code. This data is utilized in conjunction with user specified design requirements in the optimization phase to ensure best possible computational resource allocation. The optimum solution is used for rendering (e.g. texture size, canvas resolution) and simulation parameters (e.g. simulation domain) in the update phase. Test results are presented on multiple hardware platforms with diverse computing and graphics capabilities to demonstrate the effectiveness of our approach. PMID:26085713
NASA Astrophysics Data System (ADS)
Tang, Zhiyuan; Liao, Zhongfa; Xu, Feihu; Qi, Bing; Qian, Li; Lo, Hoi-Kwong
2014-05-01
We demonstrate the first implementation of polarization encoding measurement-device-independent quantum key distribution (MDI-QKD), which is immune to all detector side-channel attacks. Active phase randomization of each individual pulse is implemented to protect against attacks on imperfect sources. By optimizing the parameters in the decoy state protocol, we show that it is feasible to implement polarization encoding MDI-QKD with commercial off-the-shelf devices. A rigorous finite key analysis is applied to estimate the secure key rate. Our work paves the way for the realization of a MDI-QKD network, in which the users only need compact and low-cost state-preparation devices and can share complicated and expensive detectors provided by an untrusted network server.
Decoy-state quantum key distribution with biased basis choice
Wei, Zhengchao; Wang, Weilong; Zhang, Zhen; Gao, Ming; Ma, Zhi; Ma, Xiongfeng
2013-01-01
We propose a quantum key distribution scheme that combines a biased basis choice with the decoy-state method. In this scheme, Alice sends all signal states in the Z basis and decoy states in the X and Z bases with certain probabilities, and Bob measures the received pulses with an optimal basis choice. This scheme simplifies the system and reduces the consumption of random numbers. From simulation results that take statistical fluctuations into account, we find that in a typical experimental setup the proposed scheme can increase the key rate by at least 45% compared with the standard decoy-state scheme. In the postprocessing, we also apply a rigorous method to upper-bound the phase error rate of the single-photon components of the signal states. PMID:23948999
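For background on why a biased basis choice helps (a standard result for efficient BB84-style protocols, not a calculation from this paper): when each party independently chooses the Z basis with probability p, the fraction of pulses surviving basis sifting is p² + (1 − p)², versus 1/2 for the unbiased choice:

```python
def sifting_probability(p_z):
    # Probability that both parties choose the same basis when each
    # independently picks Z with probability p_z and X with probability 1 - p_z.
    return p_z ** 2 + (1.0 - p_z) ** 2

# The unbiased choice of standard BB84 discards half the pulses ...
assert sifting_probability(0.5) == 0.5
# ... while a strong bias toward Z keeps 82% of them.
assert abs(sifting_probability(0.9) - 0.82) < 1e-12
```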
Groves, Maria AT; Amanuel, Lily; Campbell, Jamie I; Rees, D Gareth; Sridharan, Sudharsan; Finch, Donna K; Lowe, David C; Vaughan, Tristan J
2014-01-01
In vitro selection technologies are an important means of affinity maturing antibodies to generate the optimal therapeutic profile for a particular disease target. Here, we describe the isolation of a parent antibody, KENB061, using phage display and solution-phase selections with soluble biotinylated human IL-1R1. KENB061 was affinity matured using phage display and targeted mutagenesis of the VH and VL CDR3s with NNS randomization. Affinity-matured VHCDR3 and VLCDR3 library blocks were recombined and selected using phage and ribosome display protocols. A direct comparison of the antibodies generated by phage and ribosome display was made to determine their functional characteristics. PMID:24256948
Improving experimental phases for strong reflections prior to density modification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uervirojnangkoorn, Monarin; University of Lübeck, Ratzeburger Allee 160, 23538 Lübeck; Hilgenfeld, Rolf, E-mail: hilgenfeld@biochem.uni-luebeck.de
A genetic algorithm has been developed to optimize the phases of the strongest reflections in SIR/SAD data. This is shown to facilitate density modification and model building in several test cases. Experimental phasing of diffraction data from macromolecular crystals involves deriving phase probability distributions. These distributions are often bimodal, making their weighted average, the centroid phase, improbable, so that electron-density maps computed using centroid phases are often non-interpretable. Density modification brings in information about the characteristics of electron density in protein crystals. In successful cases, this allows a choice between the modes in the phase probability distributions, and the maps can cross the borderline between non-interpretable and interpretable. Based on the suggestions by Vekhter [Vekhter (2005), Acta Cryst. D61, 899–902], the impact of identifying optimized phases for a small number of strong reflections prior to the density-modification process was investigated, using the centroid phase as a starting point for the remaining reflections. A genetic algorithm was developed that optimizes the quality of such phases using the skewness of the density map as a target function. Phases optimized in this way are then used in density modification. In most of the tests, the resulting maps were of higher quality than maps generated from the original centroid phases. In one of the test cases, the new method sufficiently improved a marginal set of experimental SAD phases to enable successful map interpretation. A computer program, SISA, has been developed to apply this method for phase improvement in macromolecular crystallography.
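A toy 1-D illustration of the idea (not the SISA program: a simple accept/reject stochastic search stands in for the genetic algorithm, and the "crystal" is synthetic): keep the known Fourier amplitudes, search only over the phases of the strongest reflections, and use map skewness as the target function:

```python
import cmath, random

random.seed(1)
N = 64

# Synthetic 1-D "crystal": a density with a few sharp peaks. We keep its
# Fourier amplitudes but pretend the phases are unknown.
true_rho = [0.0] * N
for peak in (10, 31, 50):
    true_rho[peak] = 1.0
amps = [abs(sum(true_rho[x] * cmath.exp(-2j * cmath.pi * h * x / N)
               for x in range(N))) for h in range(N)]

def density(phases):
    # Inverse DFT from the known amplitudes and trial phases (real part only).
    return [sum(amps[h] * cmath.exp(1j * (phases[h] + 2 * cmath.pi * h * x / N))
                for h in range(N)).real / N for x in range(N)]

def skewness(rho):
    n = len(rho)
    m = sum(rho) / n
    var = sum((r - m) ** 2 for r in rho) / n
    return sum((r - m) ** 3 for r in rho) / n / var ** 1.5

# Stochastic search over the phases of the 8 strongest reflections only,
# with map skewness as the target (protein maps are positively skewed:
# mostly flat solvent plus sharp density peaks).
strong = sorted(range(1, N), key=lambda h: -amps[h])[:8]
phases = [0.0] * N
best = skewness(density(phases))
for _ in range(200):
    h = random.choice(strong)
    old = phases[h]
    phases[h] = random.uniform(0.0, 2.0 * cmath.pi)
    s = skewness(density(phases))
    if s > best:
        best = s
    else:
        phases[h] = old  # reject the move
```

The paper's genetic algorithm replaces this single-candidate search with a population, crossover, and mutation, but the target function (map skewness) plays the same role.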
Large leptonic Dirac CP phase from broken democracy with random perturbations
NASA Astrophysics Data System (ADS)
Ge, Shao-Feng; Kusenko, Alexander; Yanagida, Tsutomu T.
2018-06-01
A large value of the leptonic Dirac CP phase can arise from broken democracy, where the mass matrices are democratic up to small random perturbations. Such perturbations are a natural consequence of the broken residual S3 symmetries that dictate the democratic mass matrices at leading order. With random perturbations, the leptonic Dirac CP phase has a higher probability of attaining a value around ±π/2. Compared with the anarchy model, broken democracy benefits from the residual S3 symmetries and produces much more realistic predictions for the mass hierarchy, mixing angles, and Dirac CP phase in both the quark and lepton sectors. Our approach provides a general framework for a class of models in which a residual symmetry determines the general features at leading order and, in the absence of other fundamental principles, the symmetry breaking appears in the form of random perturbations.
Simultaneous transmission for an encrypted image and a double random-phase encryption key
NASA Astrophysics Data System (ADS)
Yuan, Sheng; Zhou, Xin; Li, Da-Hai; Zhou, Ding-Fu
2007-06-01
We propose a method to simultaneously transmit a double random-phase encryption key and an encrypted image, exploiting the fact that an acceptable decryption result can be obtained even when only part of the encrypted image data is available in the decryption process. First, the original image is encoded as an encrypted image by the double random-phase encryption technique. Second, the double random-phase encryption key is encoded by the Rivest-Shamir-Adleman (RSA) public-key encryption algorithm. The amplitude of the encrypted image is then modulated by the encoded key to form what we call an encoded image. Finally, the encoded image, which carries both the encrypted image and the encoded key, is delivered to the receiver. With this method the receiver obtains an acceptable result, and secure transmission is guaranteed by the RSA cryptosystem.
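The RSA step can be sketched with textbook small-prime RSA (illustration only: the primes, the 8-bit quantization of a phase-key element, and the absence of padding are all simplifications, and real deployments need large primes and proper padding):

```python
# Textbook RSA with tiny primes. The receiver's public key (n, e)
# encrypts an element of the random-phase key; the private exponent d
# recovers it.
p, q = 61, 53
n = p * q                    # 3233
phi = (p - 1) * (q - 1)      # 3120
e = 17
d = pow(e, -1, phi)          # modular inverse of e mod phi = 2753 (Python 3.8+)

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

# A phase value quantized to 8 bits stands in for one element of the
# double random-phase key.
key_element = 123
assert decrypt(encrypt(key_element)) == key_element
```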
Han, Zhe; Pettit, Natasha N; Landon, Emily M; Brielmaier, Benjamin D
2017-04-01
Background: The impact of pharmacy interventions on optimizing vancomycin therapy has been described; however, interventions vary among studies, and the optimal pharmacy practice model (PPM) for pharmacokinetic (PK) services has not been established. Objective: The purpose of this study is to demonstrate the value of 24 hours a day, 7 days a week (24/7) PK services. Methods: New PK services were implemented in 2 phases with institutional PPM expansion. Phase 1 included universal monitoring by pharmacists, with recommendations made to prescribers during business hours. Phase 2 expanded clinical pharmacists' coverage to 24/7 and provided an optional 24/7 pharmacist-managed PK consult service. We compared vancomycin therapeutic trough attainment, dosing, and clinical and safety outcomes between phases 1 and 2 in adult inpatients receiving therapeutic intravenous vancomycin. Results: One hundred and fifty patients were included in each phase. Phase 2 had a greater proportion of vancomycin courses with therapeutic initial trough concentrations (27.5% vs 46.1%; p = 0.002), higher initial trough concentrations (10.9 mcg/mL vs 16.4 mcg/mL; p < 0.001), and optimized initial vancomycin dosing (13.5 mg/kg vs 16.2 mg/kg; p < 0.001). Phase 2 also saw a significant reduction in the incidence of vancomycin-associated nephrotoxicity (21.1% vs 11.7%; p = 0.038). Dose optimization and improvement in initial target trough attainment were most notable among intensive care unit (ICU) patients. Conclusions: Our study demonstrated that 24/7 PK services implemented with institutional PPM expansion optimized vancomycin target trough attainment and improved patient safety.
Optimization Of Mean-Semivariance-Skewness Portfolio Selection Model In Fuzzy Random Environment
NASA Astrophysics Data System (ADS)
Chatterjee, Amitava; Bhattacharyya, Rupak; Mukherjee, Supratim; Kar, Samarjit
2010-10-01
The purpose of this paper is to construct a mean-semivariance-skewness portfolio selection model in a fuzzy random environment. The objective is to maximize skewness under a predefined maximum risk tolerance and minimum expected return. The security returns in the objectives and constraints are assumed to be fuzzy random variables; their vagueness is transformed into fuzzy variables similar to trapezoidal numbers. The resulting fuzzy model is then converted into a deterministic optimization model. The feasibility and effectiveness of the proposed method are verified by a numerical example based on data from the Bombay Stock Exchange (BSE). The exact parameters of the fuzzy membership functions and probability density functions are obtained through fuzzy random simulation of past data.
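A deterministic analogue of the model can be sketched in a few lines (illustrative scenario returns, not BSE data, and a coarse grid search instead of the paper's fuzzy random simulation): maximize skewness subject to a semivariance cap and a minimum mean return:

```python
from itertools import product

# Toy historical return scenarios for three securities (invented numbers).
scenarios = [
    [0.02, -0.01, 0.03],
    [0.01,  0.04, -0.02],
    [-0.03, 0.02, 0.05],
    [0.04,  0.00, 0.01],
]

def stats(weights):
    rets = [sum(w * r for w, r in zip(weights, row)) for row in scenarios]
    n = len(rets)
    mean = sum(rets) / n
    semivar = sum(min(r - mean, 0.0) ** 2 for r in rets) / n   # downside risk
    m2 = sum((r - mean) ** 2 for r in rets) / n
    skew = sum((r - mean) ** 3 for r in rets) / n / (m2 ** 1.5 or 1.0)
    return mean, semivar, skew

def best_portfolio(max_semivar=2e-4, min_return=0.01, step=0.1):
    # Grid search over fully invested long-only weights: maximize skewness
    # subject to the semivariance cap and the minimum expected return.
    best, best_skew = None, float("-inf")
    grid = [i * step for i in range(int(1 / step) + 1)]
    for w1, w2 in product(grid, repeat=2):
        w3 = 1.0 - w1 - w2
        if w3 < -1e-9:
            continue
        mean, semivar, skew = stats((w1, w2, w3))
        if mean >= min_return and semivar <= max_semivar and skew > best_skew:
            best, best_skew = (w1, w2, round(w3, 10)), skew
    return best, best_skew
```

The fuzzy random model replaces the crisp scenario returns with fuzzy random variables, but the mean/semivariance/skewness trade-off being searched is the same.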
Two-phase framework for near-optimal multi-target Lambert rendezvous
NASA Astrophysics Data System (ADS)
Bang, Jun; Ahn, Jaemyung
2018-03-01
This paper proposes a two-phase framework to obtain a near-optimal solution of multi-target Lambert rendezvous problem. The objective of the problem is to determine the minimum-cost rendezvous sequence and trajectories to visit a given set of targets within a maximum mission duration. The first phase solves a series of single-target rendezvous problems for all departure-arrival object pairs to generate the elementary solutions, which provides candidate rendezvous trajectories. The second phase formulates a variant of traveling salesman problem (TSP) using the elementary solutions prepared in the first phase and determines the final rendezvous sequence and trajectories of the multi-target rendezvous problem. The validity of the proposed optimization framework is demonstrated through an asteroid exploration case study.
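The second phase can be sketched as an open-path traveling salesman search over the phase-1 cost table (the delta-v values below are invented; real elementary solutions would come from the Lambert solver):

```python
from itertools import permutations

# Phase 1 output, assumed precomputed: cost[i][j] is the delta-v of the
# cheapest single-target transfer from object i to object j. Index 0 is
# the chaser's initial orbit; the values are illustrative.
cost = [
    [0.0, 1.2, 2.5, 1.9],
    [1.2, 0.0, 1.1, 2.2],
    [2.5, 1.1, 0.0, 1.4],
    [1.9, 2.2, 1.4, 0.0],
]

def best_sequence(cost):
    """Phase 2: exhaustive open-path TSP over the elementary solutions,
    starting from the chaser (index 0) and visiting every target once."""
    targets = range(1, len(cost))
    best_order, best_cost = None, float("inf")
    for order in permutations(targets):
        total = cost[0][order[0]] + sum(
            cost[a][b] for a, b in zip(order, order[1:]))
        if total < best_cost:
            best_order, best_cost = order, total
    return best_order, best_cost

order, dv = best_sequence(cost)
```

Exhaustive enumeration only scales to a handful of targets; for larger target sets the TSP variant needs heuristics, which is where the paper's near-optimal framing comes in.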
Do Vascular Networks Branch Optimally or Randomly across Spatial Scales?
Newberry, Mitchell G.; Savage, Van M.
2016-01-01
Modern models that derive allometric relationships between metabolic rate and body mass are based on the architectural design of the cardiovascular system and presume sibling vessels are symmetric in terms of radius, length, flow rate, and pressure. Here, we study the cardiovascular structure of the human head and torso and of a mouse lung based on three-dimensional images processed via our software Angicart. In contrast to modern allometric theories, we find systematic patterns of asymmetry in vascular branching, potentially explaining previously documented mismatches between predictions (power-law or concave curvature) and observed empirical data (convex curvature) for the allometric scaling of metabolic rate. To examine why these systematic asymmetries in vascular branching might arise, we construct a mathematical framework to derive predictions based on local, junction-level optimality principles that have been proposed to be favored in the course of natural selection and development. The two most commonly used principles are material-cost optimizations (construction materials or blood volume) and optimization of efficient flow via minimization of power loss. We show that material-cost optimization solutions match with distributions for asymmetric branching across the whole network but do not match well for individual junctions. Consequently, we also explore random branching that is constrained at scales that range from local (junction-level) to global (whole network). We find that material-cost optimizations are the strongest predictor of vascular branching in the human head and torso, whereas locally or intermediately constrained random branching is comparable to material-cost optimizations for the mouse lung. These differences could be attributable to developmentally-programmed local branching for larger vessels and constrained random branching for smaller vessels. PMID:27902691
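At a single junction, the material-cost optimality the authors test is classically expressed by Murray's cubic law, r_parent³ = Σ r_child³; a minimal check of how far a junction deviates from it (the radii below are illustrative, not Angicart measurements):

```python
def murray_residual(r_parent, r_children):
    """Deviation from Murray's cubic law, the junction-level prediction of
    material-cost (blood volume / power loss) optimization:
    r_parent**3 == sum of r_child**3."""
    return r_parent ** 3 - sum(r ** 3 for r in r_children)

# A symmetric junction that satisfies the law exactly:
r_sym = 0.5 ** (1 / 3)          # each child radius for a parent of radius 1
assert abs(murray_residual(1.0, (r_sym, r_sym))) < 1e-12

# An asymmetric junction deviates; the residual quantifies by how much.
residual = murray_residual(1.0, (0.9, 0.3))
```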
Computer-aided diagnosis of lung nodule using gradient tree boosting and Bayesian optimization.
Nishio, Mizuho; Nishizawa, Mitsuo; Sugiyama, Osamu; Kojima, Ryosuke; Yakami, Masahiro; Kuroda, Tomohiro; Togashi, Kaori
2018-01-01
We aimed to evaluate a computer-aided diagnosis (CADx) system for lung nodule classification, focusing on (i) the usefulness of the conventional CADx system (hand-crafted imaging features + a machine learning algorithm), (ii) a comparison between the support vector machine (SVM) and gradient tree boosting (XGBoost) as machine learning algorithms, and (iii) the effectiveness of parameter optimization using Bayesian optimization and random search. Data on 99 lung nodules (62 lung cancers and 37 benign lung nodules) were included from public databases of CT images. A variant of the local binary pattern was used to calculate a feature vector, and SVM or XGBoost was trained on the feature vectors and their corresponding labels. The Tree Parzen Estimator (TPE) was used as the Bayesian optimization method for the parameters of SVM and XGBoost; random search was run for comparison with TPE. Leave-one-out cross-validation was used for optimizing and evaluating the performance of our CADx system. Performance was evaluated using the area under the curve (AUC) of receiver operating characteristic analysis; the AUC was calculated 10 times and averaged. The best averaged AUC of SVM and XGBoost was 0.850 and 0.896, respectively; both were obtained using TPE. XGBoost was generally superior to SVM. Optimal parameters for achieving a high AUC were obtained in fewer trials when using TPE than with random search, i.e., Bayesian optimization of the SVM and XGBoost parameters was more efficient than random search. In an observer study, the AUC values of two board-certified radiologists were 0.898 and 0.822. The results show that the diagnostic accuracy of our CADx system was comparable to that of radiologists with respect to classifying lung nodules.
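Random search, the baseline the study compares TPE against, can be sketched as follows (the objective here is a synthetic stand-in for the cross-validated AUC, and the parameter ranges are typical choices, not the paper's):

```python
import math, random

random.seed(0)

def cv_score(C, gamma):
    # Stand-in for the cross-validated AUC of an SVM with parameters
    # (C, gamma): a smooth bump peaked near C = 10, gamma = 0.01
    # (purely synthetic).
    return math.exp(-((math.log10(C) - 1) ** 2 + (math.log10(gamma) + 2) ** 2))

def random_search(n_trials):
    # Sample log-uniformly over typical SVM ranges and keep the best score.
    best, best_params = -1.0, None
    for _ in range(n_trials):
        C = 10 ** random.uniform(-3, 3)
        gamma = 10 ** random.uniform(-5, 1)
        s = cv_score(C, gamma)
        if s > best:
            best, best_params = s, (C, gamma)
    return best_params, best

params, score = random_search(200)
```

TPE differs in that it fits density models to the good and bad trials seen so far and samples new candidates where the ratio favors good trials, which is why it tends to need fewer evaluations than blind sampling.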
A Novel Weighted Kernel PCA-Based Method for Optimization and Uncertainty Quantification
NASA Astrophysics Data System (ADS)
Thimmisetty, C.; Talbot, C.; Chen, X.; Tong, C. H.
2016-12-01
It has been demonstrated that machine learning methods can be successfully applied to uncertainty quantification for geophysical systems through the use of the adjoint method coupled with kernel PCA-based optimization. In addition, it has been shown through weighted linear PCA how optimization with respect to both observation weights and feature space control variables can accelerate convergence of such methods. Linear machine learning methods, however, are inherently limited in their ability to represent features of non-Gaussian stochastic random fields, as they are based on only the first two statistical moments of the original data. Nonlinear spatial relationships and multipoint statistics leading to the tortuosity characteristic of channelized media, for example, are captured only to a limited extent by linear PCA. With the aim of coupling the kernel-based and weighted methods discussed, we present a novel mathematical formulation of kernel PCA, Weighted Kernel Principal Component Analysis (WKPCA), that both captures nonlinear relationships and incorporates the attribution of significance levels to different realizations of the stochastic random field of interest. We also demonstrate how new instantiations retaining defining characteristics of the random field can be generated using Bayesian methods. In particular, we present a novel WKPCA-based optimization method that minimizes a given objective function with respect to both feature space random variables and observation weights through which optimal snapshot significance levels and optimal features are learned. We showcase how WKPCA can be applied to nonlinear optimal control problems involving channelized media, and in particular demonstrate an application of the method to learning the spatial distribution of material parameter values in the context of linear elasticity, and discuss further extensions of the method to stochastic inversion.
Chronis-Tuscano, Andrea; Seymour, Karen E; Stein, Mark A; Jones, Heather A; Jiles, Cynthia D; Rooney, Mary E; Conlon, Charles J; Efron, Lisa A; Wagner, Stephanie A; Pian, Jessica; Robb, Adelaide S
2008-12-01
A preliminary study to examine the efficacy of osmotic-release oral system (OROS) methylphenidate for attention-deficit/hyperactivity disorder (ADHD) symptoms and parenting behaviors in mothers with ADHD who had children with ADHD. Participants included 23 mother-child dyads in which both were diagnosed with DSM-IV ADHD. Mothers underwent a 5-week, double-blind titration (placebo, 36 mg/day, 54 mg/day, 72 mg/day, 90 mg/day) to an optimal dose of OROS methylphenidate, followed by random assignment to 2 weeks of placebo or their maximally effective dose. Primary outcome measures included maternal ADHD symptoms (Conners' Adult ADHD Rating Scale) and parenting (Alabama Parenting Questionnaire). Secondary outcomes included side effects ratings. Data were collected from December 2004 until August 2006. During Phase 1, mothers reported significant decreases in inattention (p < .001) and hyperactivity/impulsivity (p < .01) with increases in OROS methylphenidate dose. As dose increased, significant reductions in inconsistent discipline (p < .01) and corporal punishment use (p < .005) were also demonstrated. During Phase 2, small effects on inattention (d = 0.46) and hyperactivity/impulsivity (d = 0.38) were found for those randomly assigned to medication versus placebo. In addition, medium to large medication effects were found on maternal involvement (d = 0.52), poor monitoring/supervision (d = 0.70), and inconsistent discipline (d = 0.71), with small effects on corporal punishment (d = 0.42). During both phases, few adverse effects were noted. OROS methylphenidate was well tolerated and was associated with significant improvement in maternal ADHD symptoms and parenting. Variable effects on parenting suggest that behavioral interventions may be necessary to address impairments in parenting among adults with ADHD. clinicaltrials.gov Identifier: NCT00318981. Copyright 2008 Physicians Postgraduate Press, Inc.
Rationale and Design of the SENECA (StEm cell iNjECtion in cAncer survivors) Trial.
Bolli, Roberto; Hare, Joshua M; Henry, Timothy D; Lenneman, Carrie G; March, Keith L; Miller, Kathy; Pepine, Carl J; Perin, Emerson C; Traverse, Jay H; Willerson, James T; Yang, Phillip C; Gee, Adrian P; Lima, João A; Moyé, Lem; Vojvodic, Rachel W; Sayre, Shelly L; Bettencourt, Judy; Cohen, Michelle; Ebert, Ray F; Simari, Robert D
2018-07-01
SENECA (StEm cell iNjECtion in cAncer survivors) is a phase I, randomized, double-blind, placebo-controlled study to evaluate the safety and feasibility of delivering allogeneic mesenchymal stromal cells (allo-MSCs) transendocardially in subjects with anthracycline-induced cardiomyopathy (AIC). AIC is an incurable and often fatal syndrome, with a prognosis worse than that of ischemic or nonischemic cardiomyopathy. Recently, cell therapy with MSCs has emerged as a promising new approach to repair damaged myocardium. The study population is 36 cancer survivors with a diagnosis of AIC, left ventricular (LV) ejection fraction ≤40%, and symptoms of heart failure (NYHA class II-III) on optimally-tolerated medical therapy. Subjects must be clinically free of cancer for at least two years with a ≤ 30% estimated five-year risk of recurrence. The first six subjects participated in an open-label, lead-in phase and received 100 million allo-MSCs; the remaining 30 will be randomized 1:1 to receive allo-MSCs or vehicle via 20 transendocardial injections. Efficacy measures (obtained at baseline, 6 months, and 12 months) include MRI evaluation of LV function, LV volumes, fibrosis, and scar burden; assessment of exercise tolerance (six-minute walk test) and quality of life (Minnesota Living with Heart Failure Questionnaire); clinical outcomes (MACE and cumulative days alive and out of hospital); and biomarkers of heart failure (NT-proBNP). This is the first clinical trial using direct cardiac injection of cells for the treatment of AIC. If administration of allo-MSCs is found feasible and safe, SENECA will pave the way for larger phase II/III studies with therapeutic efficacy as the primary outcome. Copyright © 2018. Published by Elsevier Inc.
Computational model for vitamin D deficiency using hair mineral analysis.
Hassanien, Aboul Ella; Tharwat, Alaa; Own, Hala S
2017-10-01
Vitamin D deficiency is prevalent in the Arabian Gulf region, especially among women. Recent studies show that vitamin D deficiency is associated with the mineral status of a patient; it is therefore important to assess mineral status to reveal any hidden mineral imbalance associated with the deficiency. A well-known test such as the red blood cell analysis is fairly expensive, invasive, and less informative. On the other hand, hair mineral analysis (HMA) can be considered an accurate, excellent, and highly informative tool to measure mineral imbalance associated with vitamin D deficiency. In this study, 118 apparently healthy Kuwaiti women were assessed for their mineral levels and vitamin D status by HMA. This information was used to build a computerized model that predicts vitamin D deficiency based on its association with the levels and ratios of minerals. The first phase of the proposed model introduces a novel hybrid optimization algorithm, an improvement of the Bat Algorithm (BA), to select the most discriminative features. The improvement uses the mutation operator of the Genetic Algorithm (GA) to update the positions of bats, with the aim of speeding up convergence and thus making the algorithm more feasible for a wider range of real-world applications. Because of the imbalanced class distribution in our dataset, in the second phase different sampling methods such as Random Under-Sampling, Random Over-Sampling, and the Synthetic Minority Oversampling Technique (SMOTE) are used to address the problem of imbalanced datasets. In the third phase, an AdaBoost ensemble classifier is used to predict vitamin D deficiency. The proposed model achieved good results in detecting vitamin D deficiency. Copyright © 2017 Elsevier Ltd. All rights reserved.
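Of the sampling methods mentioned, random over-sampling is the simplest; a minimal sketch on toy data (SMOTE would instead interpolate new synthetic samples between minority-class neighbours):

```python
import random

random.seed(42)

def random_over_sample(X, y):
    """Randomly duplicate minority-class samples until all classes have as
    many samples as the largest class (a minimal Random Over-Sampling)."""
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    n_max = max(len(v) for v in by_class.values())
    X_out, y_out = [], []
    for label, samples in by_class.items():
        extra = [random.choice(samples) for _ in range(n_max - len(samples))]
        for xi in samples + extra:
            X_out.append(xi)
            y_out.append(label)
    return X_out, y_out

# 4-to-1 imbalance, as a caricature of a deficiency dataset.
X = [[0.1], [0.2], [0.3], [0.4], [0.9]]
y = [0, 0, 0, 0, 1]
Xb, yb = random_over_sample(X, y)
```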
McIntyre, Christopher W.; Pai, Pearl; Warwick, Graham; Wilkie, Martin; Toft, Alex J.; Hutchison, Alastair J.
2009-01-01
Background and objectives: This phase II study tested the safety and efficacy of fermagate, a calcium-free iron and magnesium hydroxycarbonate binder, for treating hyperphosphatemia in hemodialysis patients. Design, setting, participants, & measurements: A randomized, double-blind, three-arm, parallel-group study compared two doses of fermagate (1 g three times daily or 2 g three times daily) with placebo. Sixty-three patients who had been on a stable hemodialysis regimen for ≥3 mo were randomized to the treatment phase. Study medication was administered three times daily just before meals for 21 d. The primary endpoint was the reduction in serum phosphate over this period. Results: In the intention-to-treat analysis, mean baseline serum phosphate was 2.16 mmol/L. The fermagate 1- and 2-g three-times-daily treatment arms were associated with statistically significant reductions in mean serum phosphate, to 1.71 and 1.47 mmol/L, respectively. Adverse event (AE) incidence in the 1-g fermagate arm was statistically comparable to that in the placebo group. The 2-g arm was associated with a statistically higher number of patients reporting AEs than the 1-g arm, particularly gastrointestinal AEs, as well as a higher number of discontinuations, complicating interpretation of this dose's efficacy. Both doses were associated with elevations of prehemodialysis serum magnesium levels. Conclusions: The efficacy and tolerability of fermagate were dose dependent. Fermagate showed promising efficacy in the treatment of hyperphosphatemia in chronic hemodialysis patients as compared with placebo in this initial phase II study. The optimal balance between efficacy and tolerability needs to be determined from future dose-titration studies or fixed-dose comparisons of more doses. PMID:19158369
Luis Martínez Fuentes, Jose; Moreno, Ignacio
2018-03-05
A new technique for encoding the amplitude and phase of diffracted fields in digital holography is proposed. It is based on a random spatial multiplexing of two phase-only diffractive patterns. The first one is the phase information of the intended pattern, while the second one is a diverging optical element whose purpose is the control of the amplitude. A random number determines the choice between these two diffractive patterns at each pixel, and the amplitude information of the desired field governs its discrimination threshold. This proposed technique is computationally fast and does not require iterative methods, and the complex field reconstruction appears on axis. We experimentally demonstrate this new encoding technique with holograms implemented onto a flicker-free phase-only spatial light modulator (SLM), which allows the axial generation of such holograms. The experimental verification includes the phase measurement of generated patterns with a phase-shifting polarization interferometer implemented in the same experimental setup.
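The per-pixel encoding rule can be sketched directly from the description (the amplitude map, target phase map, and quadratic diverging phase below are all invented placeholders, not the authors' patterns):

```python
import math, random

random.seed(7)

def encode_pixel(amplitude, phase, diverging_phase):
    # Random spatial multiplexing: with probability equal to the normalized
    # target amplitude, keep the desired phase; otherwise use the diverging
    # element, which scatters light away from the axis.
    return phase if random.random() < amplitude else diverging_phase

W = H = 8
hologram = [[encode_pixel(
                 amplitude=(x + y) / (2 * (W - 1)),                  # toy amplitude in [0, 1]
                 phase=math.pi * x * y / (W * H),                    # toy target phase
                 diverging_phase=math.pi * (x * x + y * y) / 50.0)   # lens-like phase
             for x in range(W)] for y in range(H)]
```

Because each pixel needs only one random draw and one comparison, the encoding is non-iterative and fast, which is the point the abstract makes against iterative methods.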
Shan, Yi-chu; Zhang, Yu-kui; Zhao, Rui-huan
2002-07-01
In high performance liquid chromatography, it is necessary to apply multi-composition gradient elution for the separation of complex samples such as environmental and biological samples. Multivariate stepwise gradient elution is one of the most efficient elution modes, because it combines the high selectivity of multi-composition mobile phase and shorter analysis time of gradient elution. In practical separations, the separation selectivity of samples can be effectively adjusted by using ternary mobile phase. For the optimization of these parameters, the retention equation of samples must be obtained at first. Traditionally, several isocratic experiments are used to get the retention equation of solute. However, it is time consuming especially for the separation of complex samples with a wide range of polarity. A new method for the fast optimization of ternary stepwise gradient elution was proposed based on the migration rule of solute in column. First, the coefficients of retention equation of solute are obtained by running several linear gradient experiments, then the optimal separation conditions are searched according to the hierarchical chromatography response function which acts as the optimization criterion. For each kind of organic modifier, two initial linear gradient experiments are used to obtain the primary coefficients of retention equation of each solute. For ternary mobile phase, only four linear gradient runs are needed to get the coefficients of retention equation. Then the retention times of solutes under arbitrary mobile phase composition can be predicted. The initial optimal mobile phase composition is obtained by resolution mapping for all of the solutes. A hierarchical chromatography response function is used to evaluate the separation efficiencies and search the optimal elution conditions. 
In subsequent optimization, the migrating distance of solute in the column is considered to decide the mobile phase composition and sustaining time of the latter steps until all the solutes are eluted out. Thus the first stepwise gradient elution conditions are predicted. If the resolution of samples under the predicted optimal separation conditions is satisfactory, the optimization procedure is stopped; otherwise, the coefficients of retention equation are adjusted according to the experimental results under the previously predicted elution conditions. Then the new stepwise gradient elution conditions are predicted repeatedly until satisfactory resolution is obtained. Normally, the satisfactory separation conditions can be found only after six experiments by using the proposed method. In comparison with the traditional optimization method, the time needed to finish the optimization procedure can be greatly reduced. The method has been validated by its application to the separation of several samples such as amino acid derivatives, aromatic amines, in which satisfactory separations were obtained with predicted resolution.
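The retention equation underlying such predictions is commonly the linear solvent strength (LSS) model (an assumption here; the abstract does not spell out its equation): log₁₀ k = log₁₀ k_w − Sφ, where φ is the organic-modifier fraction and the coefficients (k_w, S) per solute are fitted from the initial linear-gradient runs:

```python
def k(phi, log_kw, S):
    # LSS retention factor: log10 k = log10 k_w - S * phi.
    return 10.0 ** (log_kw - S * phi)

def isocratic_retention_time(phi, log_kw, S, t0=1.0):
    # t_R = t0 * (1 + k) for a single-composition (isocratic) step,
    # with t0 the column dead time.
    return t0 * (1.0 + k(phi, log_kw, S))

# Illustrative coefficients for one solute: more organic modifier
# (larger phi) elutes it faster.
t_strong_eluent = isocratic_retention_time(0.7, log_kw=2.0, S=3.0)
t_weak_eluent = isocratic_retention_time(0.3, log_kw=2.0, S=3.0)
```

Stepwise gradient prediction then tracks how far each solute has migrated during each composition step, switching to the next step's k value for the remaining column length.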
Optimizing event selection with the random grid search
Bhat, Pushpalatha C.; Prosper, Harrison B.; Sekmen, Sezen; ...
2018-02-27
The random grid search (RGS) is a simple but efficient stochastic algorithm to find optimal cuts that was developed in the context of the search for the top quark at Fermilab in the mid-1990s. The algorithm, and associated code, have been enhanced recently with the introduction of two new cut types, one of which has been successfully used in searches for supersymmetry at the Large Hadron Collider. The RGS optimization algorithm is described along with the recent developments, which are illustrated with two examples from particle physics. One explores the optimization of the selection of vector boson fusion events in the four-lepton decay mode of the Higgs boson, and the other optimizes SUSY searches using boosted objects and the razor variables.
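The core of RGS is easy to state: use the observed signal events themselves as the grid of candidate cut points and keep the cuts with the best figure of merit. A minimal sketch on synthetic data (two variables, rectangular lower cuts, s/√(s+b) as the figure of merit; all of these choices are illustrative):

```python
import random

random.seed(3)

# Toy events with two discriminating variables; signal tends to larger
# values than background (synthetic data, for illustration only).
signal = [(random.gauss(2.0, 1.0), random.gauss(1.5, 1.0)) for _ in range(300)]
background = [(random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)) for _ in range(300)]

def significance(s, b):
    return s / (s + b) ** 0.5 if s + b > 0 else 0.0

def rgs(signal, background):
    """Random grid search: each signal event's variable values define one
    candidate set of lower cuts; keep the cut point with the best figure
    of merit."""
    best_cut, best_fom = None, -1.0
    for cut in signal:  # the "grid" is the signal sample itself
        s = sum(all(v > c for v, c in zip(ev, cut)) for ev in signal)
        b = sum(all(v > c for v, c in zip(ev, cut)) for ev in background)
        fom = significance(s, b)
        if fom > best_fom:
            best_cut, best_fom = cut, fom
    return best_cut, best_fom

cut, fom = rgs(signal, background)
```

Placing candidate cuts at signal events automatically concentrates the grid where signal lives, which is what makes RGS efficient compared with a uniform grid scan.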
Bare-Bones Teaching-Learning-Based Optimization
Zou, Feng; Wang, Lei; Hei, Xinhong; Chen, Debao; Jiang, Qiaoyong; Li, Hongye
2014-01-01
The teaching-learning-based optimization (TLBO) algorithm, which simulates the teaching-learning process of a classroom, is one of the recently proposed swarm intelligence (SI) algorithms. In this paper, a new TLBO variant called bare-bones teaching-learning-based optimization (BBTLBO) is presented for solving global optimization problems. In this method, each learner in the teacher phase employs an interactive learning strategy, a hybridization of the teacher-phase learning strategy of the standard TLBO with Gaussian sampling learning based on neighborhood search, and each learner in the learner phase employs either the learner-phase strategy of the standard TLBO or the new neighborhood search strategy. To verify the performance of the approach, 20 benchmark functions and two real-world problems are used. From the conducted experiments it can be observed that BBTLBO performs significantly better than, or at least comparably to, TLBO and some existing bare-bones algorithms. The results indicate that the proposed algorithm is competitive with other optimization algorithms. PMID:25013844
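A compact sketch of the bare-bones idea, assuming a simplified form of the update rules (Gaussian teacher-phase sampling around the teacher/mean midpoint plus a greedy standard learner phase); it is not the paper's exact BBTLBO:

```python
import random

def bbtlbo_step(pop, f, rng):
    """One generation of a bare-bones TLBO sketch (assumed simplification).

    Teacher phase: sample each learner from a Gaussian centred midway
    between the teacher (best learner) and the class mean; this is the
    'bare-bones' idea. Learner phase: move toward a better random peer,
    as in standard TLBO. Both phases keep a candidate only if it improves.
    """
    dim = len(pop[0])
    teacher = min(pop, key=f)
    mean = [sum(x[d] for x in pop) / len(pop) for d in range(dim)]
    new_pop = []
    for x in pop:
        # teacher phase: Gaussian sampling around the teacher/mean midpoint
        cand = [rng.gauss((teacher[d] + mean[d]) / 2,
                          abs(teacher[d] - mean[d]) + 1e-12) for d in range(dim)]
        if f(cand) < f(x):
            x = cand
        # learner phase: learn from a random peer if it is better
        peer = rng.choice(pop)
        if f(peer) < f(x):
            cand = [x[d] + rng.random() * (peer[d] - x[d]) for d in range(dim)]
            if f(cand) < f(x):
                x = cand
        new_pop.append(x)
    return new_pop

# usage: minimize the 3-D sphere function
rng = random.Random(0)
sphere = lambda x: sum(v * v for v in x)
pop = [[rng.uniform(-5, 5) for _ in range(3)] for _ in range(20)]
for _ in range(100):
    pop = bbtlbo_step(pop, sphere, rng)
```

The Gaussian sampling removes the explicit teaching-factor parameter of standard TLBO, which is the appeal of bare-bones variants.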
Repurposing Blu-ray movie discs as quasi-random nanoimprinting templates for photon management
NASA Astrophysics Data System (ADS)
Smith, Alexander J.; Wang, Chen; Guo, Dongning; Sun, Cheng; Huang, Jiaxing
2014-11-01
Quasi-random nanostructures have attracted significant interest for photon management purposes. Optimizing such patterns typically requires very expensive fabrication processes to create the pre-designed, subwavelength nanostructures. While quasi-random photonic nanostructures are abundant in nature (for example, in structural coloration), they also, interestingly, exist in Blu-ray movie discs, an already mass-produced consumer product. Here we uncover that Blu-ray disc patterns are surprisingly well suited for light-trapping applications. While the algorithms in the Blu-ray industrial standard were developed with the intention of optimizing data compression and error tolerance, they have also created a quasi-random arrangement of islands and pits on the final media discs that is nearly optimized for photon management over the solar spectrum, regardless of the information stored on the discs. As a proof of concept, imprinting polymer solar cells with the Blu-ray patterns indeed increases their efficiencies. Simulation suggests that Blu-ray patterns could be broadly applied to solar cells made of other materials.
Hehlmann, R; Lauseker, M; Saußele, S; Pfirrmann, M; Krause, S; Kolb, H J; Neubauer, A; Hossfeld, D K; Nerl, C; Gratwohl, A; Baerlocher, G M; Heim, D; Brümmendorf, T H; Fabarius, A; Haferlach, C; Schlegelberger, B; Müller, M C; Jeromin, S; Proetel, U; Kohlbrenner, K; Voskanyan, A; Rinaldetti, S; Seifarth, W; Spieß, B; Balleisen, L; Goebeler, M C; Hänel, M; Ho, A; Dengler, J; Falge, C; Kanz, L; Kremers, S; Burchert, A; Kneba, M; Stegelmann, F; Köhne, C A; Lindemann, H W; Waller, C F; Pfreundschuh, M; Spiekermann, K; Berdel, W E; Müller, L; Edinger, M; Mayer, J; Beelen, D W; Bentz, M; Link, H; Hertenstein, B; Fuchs, R; Wernli, M; Schlegel, F; Schlag, R; de Wit, M; Trümper, L; Hebart, H; Hahn, M; Thomalla, J; Scheid, C; Schafhausen, P; Verbeek, W; Eckart, M J; Gassmann, W; Pezzutto, A; Schenk, M; Brossart, P; Geer, T; Bildat, S; Schäfer, E; Hochhaus, A; Hasford, J
2017-11-01
Chronic myeloid leukemia (CML)-study IV was designed to explore whether treatment with imatinib (IM) at 400 mg/day (n=400) could be optimized by doubling the dose (n=420), adding interferon (IFN) (n=430) or cytarabine (n=158) or using IM after IFN-failure (n=128). From July 2002 to March 2012, 1551 newly diagnosed patients in chronic phase were randomized into a 5-arm study. The study was powered to detect a survival difference of 5% at 5 years. After a median observation time of 9.5 years, 10-year overall survival was 82%, 10-year progression-free survival was 80% and 10-year relative survival was 92%. Survival between IM400 mg and any experimental arm was not different. In a multivariate analysis, risk group, major-route chromosomal aberrations, comorbidities, smoking and treatment center (academic vs other) influenced survival significantly, but not any form of treatment optimization. Patients reaching the molecular response milestones at 3, 6 and 12 months had a significant survival advantage. For responders, monotherapy with IM400 mg provides a close to normal life expectancy independent of the time to response. Survival is more determined by patients' and disease factors than by initial treatment selection. Although improvements are also needed for refractory disease, more life-time can currently be gained by carefully addressing non-CML determinants of survival.
NASA Astrophysics Data System (ADS)
Fu, Qiang; Gao, Duorui; Liu, Zhi; Chen, Chunyi; Lou, Yan; Jiang, Huilin
2014-11-01
Based on the transmission characteristics of partially coherent polarized light in the atmosphere, an intensity expression for fully coherent flashing light is derived using the Andrews scale-modulation method. According to the generalized Huygens-Fresnel principle and Rytov theory, the phase-fluctuation structure function is obtained under the condition that the refractive-index profile of the atmosphere follows the von Karman spectrum, and the angle-of-arrival fluctuation variance is then derived. From the RMS beam width of Gaussian beams in a turbulent atmosphere, a deviation-angle formula for fully coherent Gaussian beams is obtained, followed by the RMS beam width and deviation-angle expression of a Gaussian-Schell-model (GSM) beam in turbulence. Combined with the transmission properties of radially polarized laser beams, the cross-spectral density matrix of partially coherent radially polarized light is obtained using the generalized Huygens-Fresnel principle, and the intensity and polarization after transmission follow from the unified theory of coherence and polarization. On the basis of the analysis model and numerical simulation, the results show that the spot of a partially coherent polarized beam distorted by atmospheric turbulence is superior to that of fully coherent polarized light. Taking advantage of this feature, a new technique for suppressing atmospheric turbulence in wireless optical links is designed, namely an optimization criterion for the initial degree of coherence of the beam. The optimal initial degree of coherence changes with the atmospheric turbulence conditions, so the beam's initial degree of coherence is controlled dynamically in real time. A spatial phase screen placed before the emission aperture converts fully coherent light into partially coherent light, and a liquid-crystal spatial light modulator is a preferable way to realize the dynamic random phase. Finally, the outlook for application research on partially coherent light is discussed.
Abi-Jaoude, Alexxa; Johnson, Andrew; Ferguson, Genevieve; Sanches, Marcos; Levinson, Andrea; Robb, Janine; Heffernan, Olivia; Herzog, Tyson; Chaim, Gloria; Cleverley, Kristin; Eysenbach, Gunther; Henderson, Joanna; S Hoch, Jeffrey; Hollenberg, Elisa; Jiang, Huan; Isaranuwatchai, Wanrudee; Law, Marcus; Sharpe, Sarah; Tripp, Tim; Voineskos, Aristotle
2016-01-01
Background Seventy percent of lifetime cases of mental illness emerge prior to age 24. While early detection and intervention can address approximately 70% of child and youth cases of mental health concerns, the majority of youth with mental health concerns do not receive the services they need. Objective The objective of this paper is to describe the protocol for optimizing and evaluating Thought Spot, a Web- and mobile-based platform cocreated with end users that is designed to improve the ability of students to access mental health and substance use services. Methods This project will be conducted in 2 distinct phases, which will aim to (1) optimize the existing Thought Spot electronic health/mobile health intervention through youth engagement, and (2) evaluate the impact of Thought Spot on self-efficacy for mental health help-seeking and health literacy among university and college students. Phase 1 will utilize participatory action research and participatory design research to cocreate and coproduce solutions with members of our target audience. Phase 2 will consist of a randomized controlled trial to test the hypothesis that the Thought Spot intervention will show improvements in intentions for, and self-efficacy in, help-seeking for mental health concerns. Results We anticipate that enhancements will include (1) user analytics and feedback mechanisms, (2) peer mentorship and/or coaching functionality, (3) crowd-sourcing and data hygiene, and (4) integration of evidence-based consumer health and research information. Conclusions This protocol outlines the important next steps in understanding the impact of the Thought Spot platform on the behavior of postsecondary, transition-aged youth students when they seek information and services related to mental health and substance use. PMID:27815232
Recourse-based facility-location problems in hybrid uncertain environment.
Wang, Shuming; Watada, Junzo; Pedrycz, Witold
2010-08-01
The objective of this paper is to study facility-location problems in the presence of a hybrid uncertain environment involving both randomness and fuzziness. A two-stage fuzzy-random facility-location model with recourse (FR-FLMR) is developed in which both the demands and costs are assumed to be fuzzy-random variables. The bounds of the optimal objective value of the two-stage FR-FLMR are derived. As, in general, the fuzzy-random parameters of the FR-FLMR can be regarded as continuous fuzzy-random variables with an infinite number of realizations, the computation of the recourse requires solving infinite second-stage programming problems. Owing to this requirement, the recourse function cannot be determined analytically, and, hence, the model cannot benefit from the use of techniques of classical mathematical programming. In order to solve the location problems of this nature, we first develop a technique of fuzzy-random simulation to compute the recourse function. The convergence of such simulation scenarios is discussed. In the sequel, we propose a hybrid mutation-based binary ant-colony optimization (MBACO) approach to the two-stage FR-FLMR, which comprises the fuzzy-random simulation and the simplex algorithm. A numerical experiment illustrates the application of the hybrid MBACO algorithm. The comparison shows that the hybrid MBACO finds better solutions than the one using other discrete metaheuristic algorithms, such as binary particle-swarm optimization, genetic algorithm, and tabu search.
Stochastic reduced order models for inverse problems under uncertainty
Warner, James E.; Aquino, Wilkins; Grigoriu, Mircea D.
2014-01-01
This work presents a novel methodology for solving inverse problems under uncertainty using stochastic reduced order models (SROMs). Given statistical information about an observed state variable in a system, unknown parameters are estimated probabilistically through the solution of a model-constrained, stochastic optimization problem. The point of departure and crux of the proposed framework is the representation of a random quantity using a SROM - a low dimensional, discrete approximation to a continuous random element that permits efficient and non-intrusive stochastic computations. Characterizing the uncertainties with SROMs transforms the stochastic optimization problem into a deterministic one. The non-intrusive nature of SROMs facilitates efficient gradient computations for random vector unknowns and relies entirely on calls to existing deterministic solvers. Furthermore, the method is naturally extended to handle multiple sources of uncertainty in cases where state variable data, system parameters, and boundary conditions are all considered random. The new and widely-applicable SROM framework is formulated for a general stochastic optimization problem in terms of an abstract objective function and constraining model. For demonstration purposes, however, we study its performance in the specific case of inverse identification of random material parameters in elastodynamics. We demonstrate the ability to efficiently recover random shear moduli given material displacement statistics as input data. We also show that the approach remains effective for the case where the loading in the problem is random as well. PMID:25558115
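The core SROM construction, a low-dimensional discrete approximation of a random variable, can be sketched as follows; random search over support points stands in for the smooth optimization actually used, and the equal-probability weighting is an assumed simplification:

```python
import random

def fit_srom(samples, m, trials=500, seed=0):
    """Fit a toy SROM: m support points with equal probability 1/m chosen
    to minimize the sup-norm mismatch against the empirical CDF of
    `samples`. Random search stands in for the smooth optimization used
    in practice.
    """
    rng = random.Random(seed)
    xs = sorted(samples)
    n = len(xs)

    def err(support):
        # sup |F_srom - F_empirical| evaluated at the sample points
        return max(abs(sum(s <= t for s in support) / m - (i + 1) / n)
                   for i, t in enumerate(xs))

    best = min((rng.sample(xs, m) for _ in range(trials)), key=err)
    return sorted(best), err(best)
```

Once a random quantity is replaced by such a small discrete set, every downstream computation reduces to a handful of deterministic solver calls, which is the non-intrusive property the abstract emphasizes.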
Phase Helps Find Geometrically Optimal Gaits
NASA Astrophysics Data System (ADS)
Revzen, Shai; Hatton, Ross
Geometric motion planning describes motions of animals and machines governed by ġ = g A(q) q̇, a connection A(·) relating shape q and shape velocity q̇ to body-frame velocity g⁻¹ġ ∈ se(3). Measuring the entire connection over a multidimensional q is often unfeasible with current experimental methods. We show how using a phase estimator can make tractable measuring the local structure of the connection surrounding a periodic motion q(φ) driven by a phase φ ∈ S¹. This approach reduces the complexity of the estimation problem by a factor of dim q. The results suggest that phase estimation can be combined with geometric optimization into an iterative gait optimization algorithm usable on experimental systems, or alternatively, to allow the geometric optimality of an observed gait to be detected. ARO W911NF-14-1-0573, NSF 1462555.
Improved specimen reconstruction by Hilbert phase contrast tomography.
Barton, Bastian; Joos, Friederike; Schröder, Rasmus R
2008-11-01
The low signal-to-noise ratio (SNR) in images of unstained specimens recorded with conventional defocus phase contrast makes it difficult to interpret 3D volumes obtained by electron tomography (ET). The high defocus applied for conventional tilt series generates some phase contrast but leads to an incomplete transfer of object information. For tomography of biological weak-phase objects, optimal image contrast and subsequently an optimized SNR are essential for the reconstruction of details such as macromolecular assemblies at molecular resolution. The problem of low contrast can be partially solved by applying a Hilbert phase plate positioned in the back focal plane (BFP) of the objective lens while recording images in Gaussian focus. Images recorded with the Hilbert phase plate provide optimized positive phase contrast at low spatial frequencies, and the contrast transfer in principle extends to the information limit of the microscope. The antisymmetric Hilbert phase contrast (HPC) can be numerically converted into isotropic contrast, which is equivalent to the contrast obtained by a Zernike phase plate. Thus, in-focus HPC provides optimal structure factor information without limiting effects of the transfer function. In this article, we present the first electron tomograms of biological specimens reconstructed from Hilbert phase plate image series. We outline the technical implementation of the phase plate and demonstrate that the technique is routinely applicable for tomography. A comparison between conventional defocus tomograms and in-focus HPC volumes shows an enhanced SNR and an improved specimen visibility for in-focus Hilbert tomography.
A new logistic dynamic particle swarm optimization algorithm based on random topology.
Ni, Qingjian; Deng, Jianming
2013-01-01
Population topology of particle swarm optimization (PSO) directly affects the dissemination of optimal information during the evolutionary process and has a significant impact on the performance of PSO. Classic static population topologies are usually used in PSO, such as the fully connected, ring, star, and square topologies. In this paper, the performance of PSO with the proposed random topologies is analyzed, and the relationship between population topology and the performance of PSO is explored from the perspective of the graph-theoretic characteristics of population topologies. Further, in a relatively new PSO variant named logistic dynamic particle swarm optimization, an extensive simulation study is presented to discuss the effectiveness of the random topology and the design strategies of population topology. Finally, the experimental data are analyzed and discussed, and some useful conclusions about the design and use of population topology in PSO are proposed, which can provide a basis for further discussion and research.
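A sketch of one way to build such a random topology, assuming the common "each particle informs k random others" construction (the paper's exact generation scheme may differ):

```python
import random

def random_topology(n, k, rng):
    """Random PSO topology (sketch): each particle informs itself and k
    randomly chosen particles. Returns neighbors[i], the list of particles
    that inform particle i; particle i then tracks the best position found
    among neighbors[i] instead of the global best.
    """
    neighbors = [{i} for i in range(n)]
    for i in range(n):
        for j in rng.sample(range(n), k):
            neighbors[j].add(i)
    return [sorted(s) for s in neighbors]

# usage: an 8-particle swarm where each particle informs 2 others
nbrs = random_topology(8, 2, random.Random(1))
```

Graph-theoretic properties of the result (average degree, clustering, path length) are exactly the quantities the abstract relates to PSO performance.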
Guo, Yu; Dong, Daoyi; Shu, Chuan-Cun
2018-04-04
Achieving fast and efficient quantum state transfer is a fundamental task in physics, chemistry, and quantum information science. However, successful implementation of perfect quantum state transfer also requires robustness under practically inevitable perturbative defects. Here, we demonstrate how optimal and robust quantum state transfer can be achieved by shaping the spectral phase of an ultrafast laser pulse in the framework of frequency-domain quantum optimal control theory. Our numerical simulations of a single dibenzoterrylene molecule, as well as of atomic rubidium, show that optimal and robust quantum state transfer via spectral-phase-modulated laser pulses can be achieved by incorporating a filtering function of the frequency into the optimization algorithm, which in turn has potential applications for ultrafast robust control of photochemical reactions.
Gallegos-Lopez, Gabriel
2012-10-02
Methods, system and apparatus are provided for increasing voltage utilization in a five-phase vector controlled machine drive system that employs third harmonic current injection to increase torque and power output by a five-phase machine. To do so, a fundamental current angle of a fundamental current vector is optimized for each particular torque-speed operating point of the five-phase machine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ashraf, Arman R.; Ryan, Justin J.; Satkowski, Michael M.
Block copolymers have been extensively studied due to their ability to spontaneously self-organize into a wide variety of morphologies that are valuable in energy-, medical- and conservation-related (nano)technologies. While the phase behavior of bicomponent diblock and triblock copolymers is conventionally governed by temperature and individual block masses, we demonstrate that their phase behavior can alternatively be controlled through the use of blocks with random monomer sequencing. Block random copolymers (BRCs), i.e., diblock copolymers wherein one or both blocks is a random copolymer comprised of A and B repeat units, have been synthesized, and their phase behavior, expressed in terms of the order-disorder transition (ODT), has been investigated. Our results establish that, depending on the block composition contrast and molecular weight, BRCs can microphase-separate. We also report that the predicted ODT can be generated at relatively constant molecular weight and temperature with these new soft materials. This sequence-controlled synthetic strategy is extended to thermoplastic elastomeric triblock copolymers differing in chemistry and possessing a random-copolymer midblock.
Hill, Alison M; Harris Jackson, Kristina A; Roussell, Michael A; West, Sheila G; Kris-Etherton, Penny M
2015-01-01
Background: Food-based dietary patterns emphasizing plant protein that were evaluated in the Dietary Approaches to Stop Hypertension (DASH) and OmniHeart trials are recommended for the treatment of metabolic syndrome (MetS). However, the contribution of plant protein to total protein in these diets is proportionally less than that of animal protein. Objective: This study compared 3 diets varying in type (animal compared with plant) and amount of protein on MetS criteria. Design: Sixty-two overweight adults with MetS consumed a healthy American diet for 2 wk before being randomly allocated to either a modified DASH diet rich in plant protein (18% protein, two-thirds plant sources, n = 9 males, 12 females), a modified DASH diet rich in animal protein (Beef in an Optimal Lean Diet: 18.4% protein, two-thirds animal sources, n = 9 males, 11 females), or a moderate-protein diet (Beef in an Optimal Lean Diet Plus Protein: 27% protein, two-thirds animal sources, n = 10 males, 11 females). Diets were compared across 3 phases of energy balance: 5 wk of controlled (all foods provided) weight maintenance (WM), 6 wk of controlled weight loss (minimum 500-kcal/d deficit) including exercise (WL), and 12 wk of prescribed, free-living weight loss (FL). The primary endpoint was change in MetS criteria. Results: All groups achieved ∼5% weight loss at the end of the WL phase and maintained it through FL, with no between-diet differences (WM compared with WL, FL, P < 0.0001; between diets, P = NS). All MetS criteria decreased independent of diet composition (main effect of phase, P < 0.01; between diets, P = NS). After WM, all groups had a MetS prevalence of 80–90% [healthy American diet (HAD) compared with WM, P = NS], which decreased to 50–60% after WL and was maintained through FL (HAD, WM vs WL, FL, P < 0.01). Conclusions: Weight loss was the primary modifier of MetS resolution in our study population regardless of protein source or amount. 
Our findings demonstrate that heart-healthy weight-loss dietary patterns that emphasize either animal or plant protein improve MetS criteria similarly. This study was registered at clinicaltrials.gov as NCT00937638. PMID:26354540
Feig, Denice S; Asztalos, Elizabeth; Corcoy, Rosa; De Leiva, Alberto; Donovan, Lois; Hod, Moshe; Jovanovic, Lois; Keely, Erin; Kollman, Craig; McManus, Ruth; Murphy, Kellie; Ruedy, Katrina; Sanchez, J Johanna; Tomlinson, George; Murphy, Helen R
2016-07-18
Women with type 1 diabetes strive for optimal glycemic control before and during pregnancy to avoid adverse obstetric and perinatal outcomes. For most women, optimal glycemic control is challenging to achieve and maintain. The aim of this study is to determine whether the use of real-time continuous glucose monitoring (RT-CGM) will improve glycemic control in women with type 1 diabetes who are pregnant or planning pregnancy. A multi-center, open label, randomized, controlled trial of women with type 1 diabetes who are either planning pregnancy with an HbA1c of 7.0 % to ≤10.0 % (53 to ≤ 86 mmol/mol) or are in early pregnancy (<13 weeks 6 days) with an HbA1c of 6.5 % to ≤10.0 % (48 to ≤ 86 mmol/mol). Participants will be randomized to either RT-CGM alongside conventional intermittent home glucose monitoring (HGM), or HGM alone. Eligible women will wear a CGM which does not display the glucose result for 6 days during the run-in phase. To be eligible for randomization, a minimum of 4 HGM measurements per day and a minimum of 96 hours total with 24 hours overnight (11 pm-7 am) of CGM glucose values are required. Those meeting these criteria are randomized to RT- CGM or HGM. A total of 324 women will be recruited (110 planning pregnancy, 214 pregnant). This takes into account 15 and 20 % attrition rates for the planning pregnancy and pregnant cohorts and will detect a clinically relevant 0.5 % difference between groups at 90 % power with 5 % significance. Randomization will stratify for type of insulin treatment (pump or multiple daily injections) and baseline HbA1c. Analyses will be performed according to intention to treat. The primary outcome is the change in glycemic control as measured by HbA1c from baseline to 24 weeks or conception in women planning pregnancy, and from baseline to 34 weeks gestation during pregnancy. 
Secondary outcomes include maternal hypoglycemia, CGM time in, above and below target (3.5-7.8 mmol/l), glucose variability measures, maternal and neonatal outcomes. This will be the first international multicenter randomized controlled trial to evaluate the impact of RT- CGM before and during pregnancy in women with type 1 diabetes. ClinicalTrials.gov Identifier: NCT01788527 Registration Date: December 19, 2012.
Encrypted optical storage with wavelength-key and random phase codes.
Matoba, O; Javidi, B
1999-11-10
An encrypted optical memory system that uses a wavelength code as well as input and Fourier-plane random phase codes is proposed. Original data are illuminated by a coherent light source with a specified wavelength and are then encrypted with two random phase codes before being stored holographically in a photorefractive material. Successful decryption requires the use of a readout beam with the same wavelength as that used in the recording, in addition to the correct phase key in the Fourier plane. The wavelength selectivity of the proposed system is evaluated numerically. We show that the number of available wavelength keys depends on the correlation length of the phase key in the Fourier plane. Preliminary experiments of encryption and decryption of optical memory in a LiNbO(3):Fe photorefractive crystal are demonstrated.
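The underlying double random phase encoding, without the wavelength dimension studied in the paper, can be sketched with FFTs; array sizes and keys here are arbitrary:

```python
import numpy as np

def drpe_encrypt(img, phase_in, phase_fourier):
    """Double random phase encoding (sketch): an input-plane mask followed
    by a Fourier-plane mask. Wavelength selectivity is not modeled here."""
    field = img * np.exp(2j * np.pi * phase_in)
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(2j * np.pi * phase_fourier))

def drpe_decrypt(enc, phase_fourier):
    """Undo the Fourier-plane mask; the remaining input-plane phase is
    removed by taking the modulus, recovering the amplitude image."""
    field = np.fft.ifft2(np.fft.fft2(enc) * np.exp(-2j * np.pi * phase_fourier))
    return np.abs(field)

rng = np.random.default_rng(0)
img = rng.random((64, 64))                       # amplitude-only target
p1, p2 = rng.random((64, 64)), rng.random((64, 64))  # random phase keys
enc = drpe_encrypt(img, p1, p2)
rec = drpe_decrypt(enc, p2)
# rec matches img; decrypting with a wrong Fourier key yields noise
```

In the paper the readout wavelength acts as an additional key on top of the Fourier-plane phase code; that extra degree of freedom is what this sketch omits.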
NASA Astrophysics Data System (ADS)
Al-Asadi, H. A.
2013-02-01
We present a theoretical analysis of the additional nonlinear phase shift of the backward Stokes wave arising from stimulated Brillouin scattering in a system with a bi-directional pumping scheme. We optimize three parameters of the system (the numerical aperture, the optical loss, and the pumping wavelength) to minimize this additional nonlinear phase shift. The optimization is performed for various Brillouin pump powers and optical reflectivity values using a modern global evolutionary computation algorithm, particle swarm optimization. It is shown that the additional nonlinear phase shift of the backward Stokes wave varies with optical fiber length and, according to the particle swarm optimization algorithm, can be minimized to less than 0.07 rad for a 5 km fiber. The bi-directional pumping configuration is shown to be efficient: the transmitted output is advanced when the frequency detuning is negative and delayed when it is positive, with the optimum values of the three parameters achieving the reduction of the additional nonlinear phase shift.
Random Wiring, Ganglion Cell Mosaics, and the Functional Architecture of the Visual Cortex
Coppola, David; White, Leonard E.; Wolf, Fred
2015-01-01
The architecture of iso-orientation domains in the primary visual cortex (V1) of placental carnivores and primates apparently follows species invariant quantitative laws. Dynamical optimization models assuming that neurons coordinate their stimulus preferences throughout cortical circuits linking millions of cells specifically predict these invariants. This might indicate that V1’s intrinsic connectome and its functional architecture adhere to a single optimization principle with high precision and robustness. To validate this hypothesis, it is critical to closely examine the quantitative predictions of alternative candidate theories. Random feedforward wiring within the retino-cortical pathway represents a conceptually appealing alternative to dynamical circuit optimization because random dimension-expanding projections are believed to generically exhibit computationally favorable properties for stimulus representations. Here, we ask whether the quantitative invariants of V1 architecture can be explained as a generic emergent property of random wiring. We generalize and examine the stochastic wiring model proposed by Ringach and coworkers, in which iso-orientation domains in the visual cortex arise through random feedforward connections between semi-regular mosaics of retinal ganglion cells (RGCs) and visual cortical neurons. We derive closed-form expressions for cortical receptive fields and domain layouts predicted by the model for perfectly hexagonal RGC mosaics. Including spatial disorder in the RGC positions considerably changes the domain layout properties as a function of disorder parameters such as position scatter and its correlations across the retina. However, independent of parameter choice, we find that the model predictions substantially deviate from the layout laws of iso-orientation domains observed experimentally. 
Considering random wiring with the currently most realistic model of RGC mosaic layouts, a pairwise interacting point process, the predicted layouts remain distinct from experimental observations and resemble Gaussian random fields. We conclude that V1 layout invariants are specific quantitative signatures of visual cortical optimization, which cannot be explained by generic random feedforward-wiring models. PMID:26575467
Complex-energy approach to sum rules within nuclear density functional theory
Hinohara, Nobuo; Kortelainen, Markus; Nazarewicz, Witold; ...
2015-04-27
The linear response of the nucleus to an external field contains unique information about the effective interaction, correlations governing the behavior of the many-body system, and properties of its excited states. To characterize the response, it is useful to use its energy-weighted moments, or sum rules. By comparing computed sum rules with experimental values, the information content of the response can be utilized in the optimization process of the nuclear Hamiltonian or nuclear energy density functional (EDF). But the additional information comes at a price: compared to the ground state, computation of excited states is more demanding. To establish an efficient framework to compute energy-weighted sum rules of the response that is adaptable to the optimization of the nuclear EDF and large-scale surveys of collective strength, we have developed a new technique within the complex-energy finite-amplitude method (FAM) based on the quasiparticle random-phase approximation. The proposed sum-rule technique based on the complex-energy FAM is a tool of choice when optimizing effective interactions or energy functionals. The method is very efficient and well-adaptable to parallel computing. As a result, the FAM formulation is especially useful when standard theorems based on commutation relations involving the nuclear Hamiltonian and external field cannot be used.
Antecedent and Consequence of School Academic Optimism and Teachers' Academic Optimism Model
ERIC Educational Resources Information Center
Hong, Fu-Yuan
2017-01-01
The main purpose of this research was to examine the relationships among school principals' transformational leadership, school academic optimism, teachers' academic optimism and teachers' professional commitment. This study conducted a questionnaire survey on 367 teachers from 20 high schools in Taiwan by random sampling, using principals'…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bartkiewicz, Karol; Miranowicz, Adam
We find an optimal quantum cloning machine, which clones qubits of arbitrary symmetrical distribution around the Bloch vector with the highest fidelity. The process is referred to as phase-independent cloning in contrast to the standard phase-covariant cloning for which an input qubit state is a priori better known. We assume that the information about the input state is encoded in an arbitrary axisymmetric distribution (phase function) on the Bloch sphere of the cloned qubits. We find analytical expressions describing the optimal cloning transformation and fidelity of the clones. As an illustration, we analyze cloning of a qubit state described by the von Mises-Fisher and Brosseau distributions. Moreover, we show that the optimal phase-independent cloning machine can be implemented by modifying the mirror phase-covariant cloning machine for which quantum circuits are known.
Yasui, S; Young, L R
1984-01-01
Smooth pursuit and saccadic components of foveal visual tracking as well as more involuntary ocular movements of optokinetic (o.k.n.) and vestibular nystagmus slow phase components were investigated in man, with particular attention given to their possible input-adaptive or predictive behaviour. Each component in question was isolated from the eye movement records through a computer-aided procedure. The frequency response method was used with sinusoidal (predictable) and pseudo-random (unpredictable) stimuli. When the target motion was pseudo-random, the frequency response of pursuit eye movements revealed a large phase lead (up to about 90 degrees) at low stimulus frequencies. It is possible to interpret this result as a predictive effect, even though the stimulation was pseudo-random and thus 'unpredictable'. The pseudo-random-input frequency response intrinsic to the saccadic system was estimated in an indirect way from the pursuit and composite (pursuit + saccade) frequency response data. The result was fitted well by a servo-mechanism model, which has a simple anticipatory mechanism to compensate for the inherent neuromuscular saccadic delay by utilizing the retinal slip velocity signal. The o.k.n. slow phase also exhibited a predictive effect with sinusoidal inputs; however, pseudo-random stimuli did not produce such phase lead as found in the pursuit case. The vestibular nystagmus slow phase showed no noticeable sign of prediction in the frequency range examined (0 to ~0.7 Hz), in contrast to the results of the visually driven eye movements (i.e. saccade, pursuit and o.k.n. slow phase) at comparable stimulus frequencies. PMID:6707954
Absolute Stability Analysis of a Phase Plane Controlled Spacecraft
NASA Technical Reports Server (NTRS)
Jang, Jiann-Woei; Plummer, Michael; Bedrossian, Nazareth; Hall, Charles; Jackson, Mark; Spanos, Pol
2010-01-01
Many aerospace attitude control systems utilize phase plane control schemes that include nonlinear elements such as dead zone and ideal relay. To evaluate phase plane control robustness, stability margin prediction methods must be developed. Absolute stability is extended to predict stability margins and to define an abort condition. A constrained optimization approach is also used to design flex filters for roll control. The design goal is to optimize vehicle tracking performance while maintaining adequate stability margins. Absolute stability is shown to provide satisfactory stability constraints for the optimization.
Bhatia, Amit; Singh, Bhupinder; Raza, Kaisar; Wadhwa, Sheetu; Katare, Om Prakash
2013-02-28
Lecithin organogels (LOs) are semi-solid systems with an immobilized organic liquid phase in a 3-D network of self-assembled gelators. This paper attempts to study the various attributes of LOs, starting from the selection of materials and optimization of influential components to LO-specific characterization. After screening of various components (type of gelators, organic and aqueous phase) and construction of phase diagrams, a D-optimal mixture design was employed for the systematic optimization of the LO composition. The response surface plots were constructed for various response variables, viz. viscosity, gel strength, spreadability and consistency index. The optimized LO composition was searched employing overlay plots. Subsequent validation of the optimization study employing check-point formulations, located using grid search, indicated a high degree of prognostic ability of the experimental design. The optimized formulation was characterized for morphology, drug content, rheology, spreadability, pH, phase transition temperatures, and physical and chemical stability. The outcomes of the study were interesting, showing a high dependence of LO attributes on the type and amount of phospholipid, Poloxamer™, auxiliary gelators and organic solvent. The optimized LO was found to be quite stable, easily applicable and biocompatible. The findings of the study can be utilized for the development of LO systems of other drugs for safer and more effective topical delivery. Crown Copyright © 2013. Published by Elsevier B.V. All rights reserved.
Information content versus word length in random typing
NASA Astrophysics Data System (ADS)
Ferrer-i-Cancho, Ramon; Moscoso del Prado Martín, Fermín
2011-12-01
Recently, it has been claimed that a linear relationship between a measure of information content and word length is expected from word length optimization and it has been shown that this linearity is supported by a strong correlation between information content and word length in many languages (Piantadosi et al 2011 Proc. Nat. Acad. Sci. 108 3825). Here, we study in detail some connections between this measure and standard information theory. The relationship between the measure and word length is studied for the popular random typing process where a text is constructed by pressing keys at random from a keyboard containing letters and a space behaving as a word delimiter. Although this random process does not optimize word lengths according to information content, it exhibits a linear relationship between information content and word length. The exact slope and intercept are presented for three major variants of the random typing process. A strong correlation between information content and word length can simply arise from the units making a word (e.g., letters) and not necessarily from the interplay between a word and its context as proposed by Piantadosi and co-workers. In itself, the linear relation does not entail the results of any optimization process.
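The random typing process described here is easy to reproduce. The sketch below (illustrative, using an arbitrary three-letter alphabet with a space as delimiter) estimates each word type's information content as the negative base-2 log of its empirical frequency and checks its correlation with word length, which comes out strongly positive even though nothing was optimized.

```python
import math
import random
from collections import Counter

random.seed(1)
alphabet = "abc"  # three letters; the space acts as the word delimiter
chars = random.choices(alphabet + " ", k=300_000)
words = "".join(chars).split()

counts = Counter(words)
total = sum(counts.values())
# Keep word types seen often enough that -log2(frequency) is a stable estimate.
pairs = [(len(w), -math.log2(c / total)) for w, c in counts.items() if c >= 10]

lengths = [l for l, _ in pairs]
infos = [i for _, i in pairs]
ml, mi = sum(lengths) / len(pairs), sum(infos) / len(pairs)
cov = sum((l - ml) * (i - mi) for l, i in pairs)
r = cov / math.sqrt(sum((l - ml) ** 2 for l in lengths)
                    * sum((i - mi) ** 2 for i in infos))
print(round(r, 3))  # strong positive correlation, with no optimization involved
```

This mirrors the paper's point: the linear information-length relation emerges from the unit structure of words alone.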
NASA Astrophysics Data System (ADS)
Zhao, Hui; Wei, Jingxuan
2014-09-01
The key to the concept of tunable wavefront coding lies in detachable phase masks. Ojeda-Castaneda et al. (Progress in Electronics Research Symposium Proceedings, Cambridge, USA, July 5-8, 2010) described a typical design in which two components with cosinusoidal phase variation operate together to make defocus sensitivity tunable. The present study proposes an improved design and makes three contributions: (1) A mathematical derivation based on the stationary phase method explains why the detachable phase mask of Ojeda-Castaneda et al. tunes the defocus sensitivity. (2) The mathematical derivations show that the effective bandwidth of the wavefront-coded imaging system is also tunable by making each component of the detachable phase mask move asymmetrically. An improved Fisher information-based optimization procedure was also designed to ascertain the optimal mask parameters corresponding to a specific bandwidth. (3) Possible applications of the tunable bandwidth are demonstrated by simulated imaging.
Optimizing phonon space in the phonon-coupling model
NASA Astrophysics Data System (ADS)
Tselyaev, V.; Lyutorovich, N.; Speth, J.; Reinhard, P.-G.
2017-08-01
We present a new scheme to select the most relevant phonons in the phonon-coupling model, named here the time-blocking approximation (TBA). The new criterion, based on the phonon-nucleon coupling strengths rather than on B(EL) values, is more selective and thus produces much smaller phonon spaces in the TBA. This is beneficial in two respects: first, it curbs the computational cost, and second, it reduces the danger of double counting in the expansion basis of the TBA. We use here the TBA in a form where the coupling strength is regularized to keep the given Hartree-Fock ground state stable. The scheme is implemented in a random-phase approximation and TBA code based on the Skyrme energy functional. We first explore carefully the cutoff dependence with the new criterion and can work out a natural (optimal) cutoff parameter. Then we use the freshly developed and tested scheme for a survey of giant resonances and low-lying collective states in six doubly magic nuclei, looking also at the dependence of the results when varying the Skyrme parametrization.
On the utilization of engineering knowledge in design optimization
NASA Technical Reports Server (NTRS)
Papalambros, P.
1984-01-01
Some current research work conducted at the University of Michigan is described to illustrate efforts for incorporating knowledge in optimization in a nontraditional way. The incorporation of available knowledge in a logic structure is examined in two circumstances. The first examines the possibility of introducing global design information in a local active set strategy implemented during the iterations of projection-type algorithms for nonlinearly constrained problems. The technique used combines global and local monotonicity analysis of the objective and constraint functions. The second examines a knowledge-based program which aids the user to create configurations that are most desirable from the manufacturing assembly viewpoint. The data bank used is the classification scheme suggested by Boothroyd. The important aspect of this program is that it is an aid for synthesis intended for use in the design concept phase in a way similar to the so-called idea-triggers in creativity-enhancement techniques like brainstorming. The idea generation, however, is not random but is driven by the goal of achieving the best acceptable configuration.
NASA Astrophysics Data System (ADS)
Hamuro, Yoshitomo
2017-05-01
Protein backbone amide hydrogen/deuterium exchange mass spectrometry (HDX-MS) typically utilizes enzymatic digestion after the exchange reaction and before MS analysis to improve data resolution. Gas-phase fragmentation of a peptic fragment prior to MS analysis is a promising technique to further increase the resolution. The biggest technical challenge for this method is elimination of intramolecular hydrogen/deuterium exchange (scrambling) in the gas phase. The scrambling obscures the location of deuterium. Jørgensen's group pioneered a method to minimize the scrambling in gas-phase electron capture/transfer dissociation. Despite active investigation, the mechanism of hydrogen scrambling is not well-understood. The difficulty stems from the fact that the degree of hydrogen scrambling depends on instruments, various parameters of mass analysis, and peptide analyzed. In most hydrogen scrambling investigations, the hydrogen scrambling is measured by the percentage of scrambling in a whole molecule. This paper demonstrates that the degree of intramolecular hydrogen/deuterium exchange depends on the nature of exchangeable hydrogen sites. The deuterium on Tyr amide of neurotensin (9-13), Arg-Pro-Tyr-Ile-Leu, migrated significantly faster than that on Ile or Leu amides, indicating the loss of deuterium from the original sites is not mere randomization of hydrogen and deuterium but more site-specific phenomena. This more precise approach may help understand the mechanism of intramolecular hydrogen exchange and provide higher confidence for the parameter optimization to eliminate intramolecular hydrogen/deuterium exchange during gas-phase fragmentation.
Li, Hongfei; Yang, Zhenhua; Pan, Cheng; ...
2017-07-14
Here, we report that the addition of a non-photoactive tertiary polymer phase in the binary bulk heterojunction (BHJ) polymer solar cell leads to a self-assembled columnar nanostructure, enhancing the charge mobilities and photovoltaic efficiency with surprisingly increased optimal active blend thicknesses over 300 nm, 3–4 times larger than that of the binary counterpart. Using the prototypical poly(3-hexylthiophene) (P3HT):fullerene blend as a model BHJ system, we discover that the inert poly(methyl methacrylate) (PMMA) added in the binary BHJ blend self-assembles into vertical columns, which not only template the phase segregation of electron acceptor fullerenes but also induce the out-of-plane rotation of the edge-on-orientated crystalline P3HT phase. Using complementary interrogation methods including neutron reflectivity, X-ray scattering, atomic force microscopy, transmission electron microscopy, and molecular dynamics simulations, we show that the enhanced charge transport originates from the more randomized molecular stacking of the P3HT phase and the spontaneous segregation of fullerenes at the P3HT/PMMA interface, driven by the high surface tension between the two polymeric components. The results demonstrate a potential method for increasing the thicknesses of high-performance polymer BHJ solar cells with improved photovoltaic efficiency, alleviating the burden of stringently controlling the ultrathin blend thickness during the roll-to-roll-type large-area manufacturing environment.
Phenotypic Graphs and Evolution Unfold the Standard Genetic Code as the Optimal
NASA Astrophysics Data System (ADS)
Zamudio, Gabriel S.; José, Marco V.
2018-03-01
In this work, we explicitly consider the evolution of the Standard Genetic Code (SGC) by assuming two evolutionary stages, to wit, the primeval RNY code and two intermediate codes in between. We used network theory and graph theory to measure the connectivity of each phenotypic graph. The connectivity values are compared to the values of the codes under different randomization scenarios. An error-correcting optimal code is one in which the algebraic connectivity is minimized. We show that the SGC is optimal in regard to its robustness and error-tolerance when compared to all random codes under different assumptions.
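Algebraic connectivity, the quantity minimized by an error-correcting optimal code in this framework, is the second-smallest eigenvalue of the graph Laplacian. A small self-contained check on toy graphs (not the actual phenotypic graphs of the SGC):

```python
import numpy as np

def algebraic_connectivity(adj):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A."""
    adj = np.asarray(adj, dtype=float)
    lap = np.diag(adj.sum(axis=1)) - adj
    return float(np.sort(np.linalg.eigvalsh(lap))[1])

# Complete graph K4 (densely connected) vs. path graph P4 (sparsely connected).
k4 = np.ones((4, 4)) - np.eye(4)
p4 = np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1)

print(algebraic_connectivity(k4))  # = 4.0 (up to floating-point error)
print(algebraic_connectivity(p4))  # = 2 - sqrt(2), about 0.586
```

Minimizing this value over candidate codes, as the abstract describes, favors sparser, more error-tolerant phenotypic graphs.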
Tian, Yuzhen; Guo, Jin; Wang, Rui; Wang, Tingfeng
2011-09-12
To investigate the statistical properties of Gaussian beam propagation through an arbitrary-thickness random phase screen, for adaptive optics and laser communication applications in the laboratory, we establish mathematical models of the statistical quantities involved in the propagation process, based on the Rytov method and the thin phase screen model. Analytic results are developed for an arbitrary-thickness phase screen based on the Kolmogorov power spectrum. The comparison between the arbitrary-thickness phase screen and the thin phase screen shows that our results are better suited to describing the generalized case, especially the scintillation index.
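For laboratory work of the kind described, a random phase screen with Kolmogorov statistics is commonly synthesized by spectrally filtering white Gaussian noise. The sketch below follows that standard FFT recipe; note that the overall amplitude normalization varies between references, so the scaling here should be treated as approximate.

```python
import numpy as np

rng = np.random.default_rng(42)

def kolmogorov_screen(n=256, dx=0.01, r0=0.1):
    """Random phase screen [rad] on an n x n grid with spacing dx [m].

    r0 is the Fried parameter; the phase PSD used is the Kolmogorov form
    0.023 * r0**(-5/3) * f**(-11/3).
    """
    fx = np.fft.fftfreq(n, d=dx)
    fxx, fyy = np.meshgrid(fx, fx)
    f = np.hypot(fxx, fyy)
    f[0, 0] = np.inf                          # zero out the undefined DC term
    psd = 0.023 * r0 ** (-5 / 3) * f ** (-11 / 3)
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    df = 1.0 / (n * dx)
    # Filter complex white noise by sqrt(PSD); keep the real part as the screen.
    return np.real(np.fft.ifft2(noise * np.sqrt(psd) * df) * n * n)

phi = kolmogorov_screen()
```

Stacking several such screens along the propagation path is one common way to approximate the arbitrary-thickness case the abstract treats analytically.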
Sui, Sai; Ma, Hua; Lv, Yueguang; Wang, Jiafu; Li, Zhiqiang; Zhang, Jieqiu; Xu, Zhuo; Qu, Shaobo
2018-01-22
Arbitrary control of electromagnetic waves remains a significant challenge, although it promises many important applications. Here, we propose a fast optimization method for designing a wideband metasurface without using the Pancharatnam-Berry (PB) phase, whose elements are non-absorptive and exhibit a wideband, smooth phase shift. In our design method, the metasurface is composed of low-Q-factor resonant elements without using the PB phase and is optimized by a genetic algorithm and nonlinear fitting, with the advantage that far-field scattering patterns can be quickly synthesized from hybrid array patterns. To validate the design method, a wideband low-radar-cross-section metasurface is demonstrated, showing good feasibility and performance for wideband RCS reduction. This work reveals the potential of metasurfaces for effective manipulation of microwaves and provides a flexible, fast optimal design method.
Optimization of gear ratio and power distribution for a multimotor powertrain of an electric vehicle
NASA Astrophysics Data System (ADS)
Urbina Coronado, Pedro Daniel; Orta Castañón, Pedro; Ahuett-Garza, Horacio
2018-02-01
The architecture and design of the propulsion system of electric vehicles are highly important for the reduction of energy losses. This work presents a powertrain composed of four electric motors in which each motor is connected with a different gear ratio to the differential of the rear axle. A strategy to reduce energy losses is proposed, in which two phases are applied. Phase 1 uses a divide-and-conquer approach to increase the overall output efficiency by obtaining the optimal torque distribution for the electric motors. Phase 2 applies a genetic algorithm to find the optimal value of the gear ratios, in which each individual of each generation applies Phase 1. The results show an optimized efficiency map for the output torque and speed of the powertrain. The increase in efficiency and the reduction of energy losses are validated by the use of numerical experiments in various driving cycles.
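The two-phase strategy can be illustrated with a toy model. The sketch below uses two motors instead of four, a made-up quadratic loss model, a grid search standing in for the divide-and-conquer torque split (Phase 1), and random search standing in for the genetic algorithm over gear ratios (Phase 2); every numeric value is an assumption for illustration only.

```python
import random

random.seed(7)

# Toy per-motor loss model (an assumption, not the paper's efficiency maps):
# copper loss grows with motor torque squared, iron loss with speed squared.
def motor_loss(torque, speed):
    return 0.05 * torque ** 2 + 0.002 * speed ** 2

def phase1_split(wheel_torque, wheel_speed, ratios, steps=50):
    """Phase 1 stand-in: grid-search the torque split between two motors."""
    best = float("inf")
    for s in range(steps + 1):
        t1 = wheel_torque * s / steps
        t2 = wheel_torque - t1
        # Each motor sees wheel torque divided by, and wheel speed multiplied
        # by, its own gear ratio.
        loss = (motor_loss(t1 / ratios[0], wheel_speed * ratios[0])
                + motor_loss(t2 / ratios[1], wheel_speed * ratios[1]))
        best = min(best, loss)
    return best

def cycle_loss(ratios, cycle):
    """Total loss over a driving cycle of (wheel torque, wheel speed) points."""
    return sum(phase1_split(T, w, ratios) for T, w in cycle)

cycle = [(120, 30), (80, 60), (200, 15), (40, 90)]   # tiny stand-in cycle

# Phase 2 stand-in: random search over the two gear ratios, where every
# candidate is evaluated through Phase 1, as in the paper's nested scheme.
best_r = (6.0, 10.0)
best_l = cycle_loss(best_r, cycle)
for _ in range(500):
    r = (random.uniform(3, 12), random.uniform(3, 12))
    l = cycle_loss(r, cycle)
    if l < best_l:
        best_r, best_l = r, l
print(best_r, round(best_l, 1))
```

The nesting is the essential point: each outer candidate (gear ratios) is scored only after the inner torque-distribution problem has been solved for it.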
Configuration-shape-size optimization of space structures by material redistribution
NASA Technical Reports Server (NTRS)
Vandenbelt, D. N.; Crivelli, L. A.; Felippa, C. A.
1993-01-01
This project investigates the configuration-shape-size optimization (CSSO) of orbiting and planetary space structures. The project embodies three phases. In the first one the material-removal CSSO method introduced by Kikuchi and Bendsoe (KB) is further developed to gain understanding of finite element homogenization techniques as well as associated constrained optimization algorithms that must carry along a very large number (thousands) of design variables. In the CSSO-KB method an optimal structure is 'carved out' of a design domain initially filled with finite elements, by allowing perforations (microholes) to develop, grow and merge. The second phase involves 'materialization' of space structures from the void, thus reversing the carving process. The third phase involves analysis of these structures for construction and operational constraints, with emphasis in packaging and deployment. The present paper describes progress in selected areas of the first project phase and the start of the second one.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fang, Baolong (Department of Mathematics and Physics, Hefei University, Hefei 230022); Yang, Zhen
We propose a scheme for implementing a partial general quantum cloning machine with superconducting quantum-interference devices coupled to a nonresonant cavity. By regulating the time parameters, our system can perform optimal symmetric (asymmetric) universal quantum cloning, optimal symmetric (asymmetric) phase-covariant cloning, and optimal symmetric economical phase-covariant cloning. In the scheme the cavity is only virtually excited, thus, the cavity decay is suppressed during the cloning operations.
Glassy phases and driven response of the phase-field-crystal model with random pinning.
Granato, E; Ramos, J A P; Achim, C V; Lehikoinen, J; Ying, S C; Ala-Nissila, T; Elder, K R
2011-09-01
We study the structural correlations and the nonlinear response to a driving force of a two-dimensional phase-field-crystal model with random pinning. The model provides an effective continuous description of lattice systems in the presence of disordered external pinning centers, allowing for both elastic and plastic deformations. We find that the phase-field crystal with disorder assumes an amorphous glassy ground state, with only short-ranged positional and orientational correlations, even in the limit of weak disorder. Under increasing driving force, the pinned amorphous-glass phase evolves into a moving plastic-flow phase and then, finally, a moving smectic phase. The transverse response of the moving smectic phase shows a vanishing transverse critical force for increasing system sizes.
Role of Statistical Random-Effects Linear Models in Personalized Medicine.
Diaz, Francisco J; Yeh, Hung-Wen; de Leon, Jose
2012-03-01
Some empirical studies and recent developments in pharmacokinetic theory suggest that statistical random-effects linear models are valuable tools that allow describing simultaneously patient populations as a whole and patients as individuals. This remarkable characteristic indicates that these models may be useful in the development of personalized medicine, which aims at finding treatment regimes that are appropriate for particular patients, not just appropriate for the average patient. In fact, published developments show that random-effects linear models may provide a solid theoretical framework for drug dosage individualization in chronic diseases. In particular, individualized dosages computed with these models by means of an empirical Bayesian approach may produce better results than dosages computed with some methods routinely used in therapeutic drug monitoring. This is further supported by published empirical and theoretical findings that show that random effects linear models may provide accurate representations of phase III and IV steady-state pharmacokinetic data, and may be useful for dosage computations. These models have applications in the design of clinical algorithms for drug dosage individualization in chronic diseases; in the computation of dose correction factors; computation of the minimum number of blood samples from a patient that are necessary for calculating an optimal individualized drug dosage in therapeutic drug monitoring; measure of the clinical importance of clinical, demographic, environmental or genetic covariates; study of drug-drug interactions in clinical settings; the implementation of computational tools for web-site-based evidence farming; design of pharmacogenomic studies; and in the development of a pharmacological theory of dosage individualization.
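The empirical Bayes idea behind such dosage individualization reduces, in the simplest random-intercept case, to precision-weighted shrinkage between the patient's own mean and the population mean. A minimal sketch with made-up population parameters:

```python
import numpy as np

# Population (random-effects) parameters — hypothetical values for illustration:
mu_pop, tau2 = 50.0, 36.0      # population mean and between-patient variance
sigma2 = 25.0                  # within-patient measurement variance

def empirical_bayes_mean(measurements):
    """Posterior mean of a patient's own level under a random-intercept model:
    a precision-weighted blend of the patient's data and the population mean."""
    y = np.asarray(measurements, dtype=float)
    n = y.size
    w = (n / sigma2) / (n / sigma2 + 1 / tau2)   # weight on the patient's data
    return float(w * y.mean() + (1 - w) * mu_pop)

# One sample is pulled part-way toward the population mean; five concordant
# samples sit much closer to the patient's own level.
print(empirical_bayes_mean([70.0]))
print(empirical_bayes_mean([70.0] * 5))
```

This shrinkage behavior is also why these models can answer the question raised in the abstract of how many blood samples are needed before an individualized dosage is reliable.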
NASA Astrophysics Data System (ADS)
Liao, Zhikun; Lu, Dawei; Hu, Jiemin; Zhang, Jun
2018-04-01
For the random hopping frequency (RHF) signal, the modulated frequencies are randomly distributed over a given bandwidth. The randomness of the modulated frequency not only improves the electronic counter-countermeasure capability of radar systems, but also determines the performance of range compression. In this paper, the range ambiguity function of the RHF signal is first derived. Then, a design method for the frequency hopping pattern, based on the stationary phase principle, is proposed to improve the peak-to-sidelobe ratio. Finally, simulated experiments demonstrate the effectiveness of the proposed design method.
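The role of the hopping pattern in range compression can be seen in a small simulation. The sketch below (all parameter values are assumptions) synthesizes echoes of a point target over a randomly ordered frequency grid, reorders them into frequency bins, and takes an IDFT as the range profile; when every bin is used the profile is ideal, while keeping only a random subset raises the sidelobes that a hopping-pattern design seeks to control.

```python
import numpy as np

rng = np.random.default_rng(3)

N, df = 64, 1e6                  # number of pulses and frequency step [Hz] (assumed)
c = 3e8
hop = rng.permutation(N)         # random hopping pattern over the frequency grid
r_true = 37.5                    # point-target range [m]; falls exactly on bin 16 here

# Echo phase for a pulse at frequency k*df: exp(-j * 4*pi * f * R / c).
echo = np.exp(-1j * 4 * np.pi * hop * df * r_true / c)

# Range compression: place each echo back at its frequency bin, then IDFT.
spectrum = np.zeros(N, dtype=complex)
spectrum[hop] = echo
profile = np.abs(np.fft.ifft(spectrum))
peak = int(np.argmax(profile))   # target bin = 2*N*df*r_true/c

# Sparse hopping (half the bins) degrades the peak-to-sidelobe ratio (PSLR).
keep = hop[: N // 2]
sp2 = np.zeros(N, dtype=complex)
sp2[keep] = np.exp(-1j * 4 * np.pi * keep * df * r_true / c)
prof2 = np.abs(np.fft.ifft(sp2))
pslr = np.sort(prof2)[-2] / prof2.max()
print(peak, round(pslr, 3))
```

With the full grid the sidelobes vanish; with a sparse random pattern they depend on which frequencies were kept, which is exactly the degree of freedom the stationary-phase design exploits.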
Suzuki, Kazuyuki; Endo, Ryujin; Takikawa, Yasuhiro; Moriyasu, Fuminori; Aoyagi, Yutaka; Moriwaki, Hisataka; Terai, Shuji; Sakaida, Isao; Sakai, Yoshiyuki; Nishiguchi, Shuhei; Ishikawa, Toru; Takagi, Hitoshi; Naganuma, Atsushi; Genda, Takuya; Ichida, Takafumi; Takaguchi, Koichi; Miyazawa, Katsuhiko; Okita, Kiwamu
2018-05-01
The efficacy and safety of rifaximin in the treatment of hepatic encephalopathy (HE) are widely known, but they have not been confirmed in Japanese patients with HE. Thus, two prospective, randomized studies (a phase II/III study and a phase III study) were carried out. Subjects with grade I or II HE and hyperammonemia were enrolled. The phase II/III study, which was a randomized, evaluator-blinded, active-comparator, parallel-group study, was undertaken at 37 institutions in Japan. Treatment periods were 14 days. Eligible patients were randomized to the rifaximin group (1200 mg/day) or the lactitol group (18-36 g/day). The phase III study was carried out in the same patients previously enrolled in the phase II/III study, and they were all treated with rifaximin (1200 mg/day) for 10 weeks. In the phase II/III study, 172 patients were enrolled. Blood ammonia (B-NH3) concentration was significantly improved in the rifaximin group, but the difference between the two groups was not significant. The portal systemic encephalopathy index (PSE index), including HE grade, was significantly improved in both groups. In the phase III study, 87.3% of enrolled patients completed the treatment. The improved B-NH3 concentration and PSE index were well maintained from the phase II/III study during the treatment period of the phase III study. Adverse drug reactions (ADRs) were seen in 13.4% of patients who received rifaximin, but there were no severe ADRs leading to death. The efficacy of rifaximin is sufficient and treatment is well tolerated in Japanese patients with HE and hyperammonemia. © 2017 The Japan Society of Hepatology.
Partial information, market efficiency, and anomalous continuous phase transition
NASA Astrophysics Data System (ADS)
Yang, Guang; Zheng, Wenzhi; Huang, Jiping
2014-04-01
It is a common belief in economics and social science that if there is more information available for agents to gather in a human system, the system can become more efficient. The belief can be easily understood according to the well-known efficient market hypothesis. In this work, we attempt to challenge this belief by investigating a complex adaptive system, which is modeled by a market-directed resource-allocation game with a directed random network. We conduct a series of controlled human experiments in the laboratory to show the reliability of the model design. As a result, we find that even under a small information concentration, the system can still almost reach the optimal (balanced) state. Furthermore, the ensemble average of the system’s fluctuation level goes through a continuous phase transition. This behavior means that in the second phase if too much information is shared among agents, the system’s stability will be harmed instead, which differs from the belief mentioned above. Also, at the transition point, the ensemble fluctuations of the fluctuation level remain at a low value. This phenomenon is in contrast to the textbook knowledge about continuous phase transitions in traditional physical systems, namely, fluctuations will rise abnormally around a transition point since the correlation length becomes infinite. Thus, this work is of potential value to a variety of fields, such as physics, economics, complexity science, and artificial intelligence.
Leinweber, Felix C; Tallarek, Ulrich
2003-07-18
Monolithic chromatographic support structures offer, as compared to conventional particulate materials, a unique combination of high bed permeability, optimized solute transport to and from the active surface sites, and a high loading capacity, achieved by introducing hierarchical order in the interconnected pore network and the possibility to independently manipulate the contributing sets of pores. While the basic principles governing flow resistance, axial dispersion and adsorption capacity remain identical, and a similarity to particulate systems can be well recognized on that basis, a direct comparison of sphere geometry with monolithic structures is less obvious due, not least, to the complex shape of the skeleton domain. We present here a simple, widely applicable, phenomenological approach for treating single-phase incompressible flow through structures having a continuous, rigid solid phase. It relies on the determination of equivalent particle (sphere) dimensions which characterize the corresponding behaviour in a particulate, i.e. discontinuous, bed. Equivalence is then obtained by dimensionless scaling of macroscopic fluid dynamical behaviour, hydraulic permeability and hydrodynamic dispersion in both types of materials, without needing a direct geometrical translation of their constituent units. Differences in adsorption capacity between particulate and monolithic stationary phases show that the silica-based monoliths with a bimodal pore size distribution provide, owing to the material's high total porosity of more than 90%, maximum loading capacities comparable to those of random close packings of completely porous spheres.
Chiral NNLOsat descriptions of nuclear multipole resonances within the random-phase approximation
NASA Astrophysics Data System (ADS)
Wu, Q.; Hu, B. S.; Xu, F. R.; Ma, Y. Z.; Dai, S. J.; Sun, Z. H.; Jansen, G. R.
2018-05-01
We study nuclear multipole resonances in the framework of the random-phase approximation by using the chiral potential NNLOsat. This potential includes two- and three-body terms that have been simultaneously optimized to low-energy nucleon-nucleon scattering data and selected nuclear structure data. Our main focus has been on the isoscalar monopole, isovector dipole, and isoscalar quadrupole resonances of the closed-shell nuclei, 4He,
NASA Technical Reports Server (NTRS)
Ham, Yoo-Geun; Schubert, Siegfried; Chang, Yehui
2012-01-01
An initialization strategy, tailored to the prediction of the Madden-Julian oscillation (MJO), is evaluated using the Goddard Earth Observing System Model, version 5 (GEOS-5), coupled general circulation model (CGCM). The approach is based on the empirical singular vectors (ESVs) of a reduced-space statistically determined linear approximation of the full nonlinear CGCM. The initial ESV, extracted using 10 years (1990-99) of boreal winter hindcast data, has zonal wind anomalies over the western Indian Ocean, while the final ESV (at a forecast lead time of 10 days) reflects a propagation of the zonal wind anomalies to the east over the Maritime Continent, an evolution that is characteristic of the MJO. A new set of ensemble hindcasts is produced for the boreal winter season from 1990 to 1999 in which the leading ESV provides the initial perturbations. The results are compared with those from a set of control hindcasts generated using random perturbations. It is shown that the ESV-based predictions have a systematically higher bivariate correlation skill in predicting the MJO compared to those using the random perturbations. Furthermore, the improvement in the skill depends on the phase of the MJO. The ESV is particularly effective in increasing the forecast skill during those phases of the MJO in which the control has low skill (with correlations increasing by as much as 0.2 at 20-25-day lead times), as well as during those times in which the MJO is weak.
Fuzzy probabilistic design of water distribution networks
NASA Astrophysics Data System (ADS)
Fu, Guangtao; Kapelan, Zoran
2011-05-01
The primary aim of this paper is to present a fuzzy probabilistic approach for optimal design and rehabilitation of water distribution systems, combining aleatoric and epistemic uncertainties in a unified framework. The randomness and imprecision in future water consumption are characterized using fuzzy random variables whose realizations are not real numbers but fuzzy numbers, and the nodal head requirements are represented by fuzzy sets, reflecting the imprecision in customers' requirements. The optimal design problem is formulated as a two-objective optimization problem, with minimization of total design cost and maximization of system performance as objectives. The system performance is measured by the fuzzy random reliability, defined as the probability that the fuzzy head requirements are satisfied across all network nodes. The degree of satisfaction is represented by a necessity measure or belief measure in the sense of the Dempster-Shafer theory of evidence. An efficient algorithm is proposed, within a Monte Carlo procedure, to calculate the fuzzy random system reliability and is effectively combined with the nondominated sorting genetic algorithm II (NSGAII) to derive the Pareto optimal design solutions. The newly proposed methodology is demonstrated with two case studies: the New York tunnels network and the Hanoi network. The results from both cases indicate that the new methodology can effectively accommodate and handle various aleatoric and epistemic uncertainty sources arising from the design process and can provide optimal design solutions that are not only cost-effective but also have higher reliability to cope with severe future uncertainties.
Hierarchical random walks in trace fossils and the origin of optimal search behavior
Sims, David W.; Reynolds, Andrew M.; Humphries, Nicolas E.; Southall, Emily J.; Wearmouth, Victoria J.; Metcalfe, Brett; Twitchett, Richard J.
2014-01-01
Efficient searching is crucial for timely location of food and other resources. Recent studies show that diverse living animals use a theoretically optimal scale-free random search for sparse resources known as a Lévy walk, but little is known of the origins and evolution of foraging behavior and the search strategies of extinct organisms. Here, using simulations of self-avoiding trace fossil trails, we show that randomly introduced strophotaxis (U-turns)—initiated by obstructions such as self-trail avoidance or innate cueing—leads to random looping patterns with clustering across increasing scales that is consistent with the presence of Lévy walks. This predicts that optimal Lévy searches may emerge from simple behaviors observed in fossil trails. We then analyzed fossilized trails of benthic marine organisms by using a novel path analysis technique and found the first evidence, to our knowledge, of Lévy-like search strategies in extinct animals. Our results show that simple search behaviors of extinct animals in heterogeneous environments give rise to hierarchically nested Brownian walk clusters that converge to optimal Lévy patterns. Primary productivity collapse and large-scale food scarcity characterizing mass extinctions evident in the fossil record may have triggered adaptation of optimal Lévy-like searches. The findings suggest that Lévy-like behavior has been used by foragers since at least the Eocene but may have a more ancient origin, which might explain recent widespread observations of such patterns among modern taxa. PMID:25024221
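The contrast between Brownian and Lévy step statistics at the heart of this study can be sketched in a few lines. This is an illustrative simulation, not the authors' trail-analysis code; the power-law exponent `mu` and the step-length distributions are assumptions chosen for demonstration:

```python
import numpy as np

def walk(n_steps, levy=False, mu=2.0, rng=None):
    """Simulate a 2-D random walk; Lévy steps draw power-law lengths."""
    rng = np.random.default_rng(rng)
    angles = rng.uniform(0.0, 2.0 * np.pi, n_steps)   # isotropic turning angles
    if levy:
        # Inverse-CDF sampling of p(l) ~ l**(-mu) for l >= 1 (scale-free steps)
        lengths = (1.0 - rng.uniform(size=n_steps)) ** (-1.0 / (mu - 1.0))
    else:
        lengths = rng.rayleigh(1.0, n_steps)          # Brownian-like steps
    steps = lengths[:, None] * np.column_stack([np.cos(angles), np.sin(angles)])
    return np.cumsum(steps, axis=0)

brownian = walk(10_000, levy=False, rng=0)
levy = walk(10_000, levy=True, rng=0)
# Net displacements; the heavy-tailed Lévy walk typically covers far more ground.
print(np.linalg.norm(brownian[-1]), np.linalg.norm(levy[-1]))
```

With `mu` near 2, rare long relocations dominate the displacement, which is what makes Lévy searches efficient when targets are sparse.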
NASA Astrophysics Data System (ADS)
Yuan, Sheng; Yang, Yangrui; Liu, Xuemei; Zhou, Xin; Wei, Zhenzhuo
2018-01-01
An optical image transformation and encryption scheme is proposed based on double random-phase encoding (DRPE) and compressive ghost imaging (CGI) techniques. In this scheme, a secret image is first transformed into a binary image with the phase-retrieval-based DRPE technique, and then encoded by a series of random amplitude patterns according to the ghost imaging (GI) principle. Compressive sensing, together with morphological erosion and dilation operations, is implemented to retrieve the secret image in the decryption process. This encryption scheme takes advantage of the complementary capabilities offered by the phase-retrieval-based DRPE and GI-based encryption techniques. That is, the phase-retrieval-based DRPE is used to overcome the blurring defect of the decrypted image in GI-based encryption, while the CGI not only reduces the data amount of the ciphertext, but also enhances the security of DRPE. Computer simulation results are presented to verify the performance of the proposed encryption scheme.
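As background, the classical double random-phase encoding that this scheme builds on can be sketched with two FFTs: one random phase mask in the input plane and one in the Fourier plane. This is a generic DRPE illustration on synthetic data, not the proposed DRPE+CGI system:

```python
import numpy as np

def drpe_encrypt(img, key1, key2):
    """Double random-phase encoding: phase masks in the input and Fourier planes."""
    m1 = np.exp(2j * np.pi * key1)                 # input-plane phase mask
    m2 = np.exp(2j * np.pi * key2)                 # Fourier-plane phase mask
    return np.fft.ifft2(np.fft.fft2(img * m1) * m2)

def drpe_decrypt(cipher, key1, key2):
    """Invert the encoding with the conjugate phase masks."""
    m1 = np.exp(2j * np.pi * key1)
    m2 = np.exp(2j * np.pi * key2)
    return np.fft.ifft2(np.fft.fft2(cipher) * np.conj(m2)) * np.conj(m1)

rng = np.random.default_rng(42)
img = rng.random((64, 64))                         # stand-in for the secret image
k1, k2 = rng.random((64, 64)), rng.random((64, 64))
cipher = drpe_encrypt(img, k1, k2)
recovered = np.abs(drpe_decrypt(cipher, k1, k2))
print(np.allclose(recovered, img))                 # True: exact keys recover the image
```

The ciphertext is a stationary white complex field; without both keys, the inverse transform yields only speckle.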
System Design under Uncertainty: Evolutionary Optimization of the Gravity Probe-B Spacecraft
NASA Technical Reports Server (NTRS)
Pullen, Samuel P.; Parkinson, Bradford W.
1994-01-01
This paper discusses the application of evolutionary random-search algorithms (Simulated Annealing and Genetic Algorithms) to the problem of spacecraft design under performance uncertainty. Traditionally, spacecraft performance uncertainty has been measured by reliability. Published algorithms for reliability optimization are seldom used in practice because they oversimplify reality. The algorithm developed here uses random-search optimization to allow us to model the problem more realistically. Monte Carlo simulations are used to evaluate the objective function for each trial design solution. These methods have been applied to the Gravity Probe-B (GP-B) spacecraft being developed at Stanford University for launch in 1999. Results of the algorithm developed here for GP-B are shown, and their implications for design optimization by evolutionary algorithms are discussed.
Security authentication using phase-encoded nanoparticle structures and polarized light.
Carnicer, Artur; Hassanfiroozi, Amir; Latorre-Carmona, Pedro; Huang, Yi-Pai; Javidi, Bahram
2015-01-15
Phase-encoded nanostructures such as quick response (QR) codes made of metallic nanoparticles are suggested for use in security and authentication applications. We present a polarimetric optical method able to authenticate random phase-encoded QR codes. The system is illuminated using polarized light, and the QR code is encoded using a phase-only random mask. Using classification algorithms, it is possible to validate the QR code from examination of the polarimetric signature of the speckle pattern. We used the Kolmogorov-Smirnov statistical test and Support Vector Machine algorithms to authenticate the phase-encoded QR codes using polarimetric signatures.
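The Kolmogorov-Smirnov step of such an authentication pipeline can be illustrated by comparing intensity statistics of a probe against a stored signature. The exponential speckle model, sample sizes, and decision threshold below are stand-in assumptions for demonstration, not the paper's measured polarimetric signatures:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)

# Stand-in signatures: fully developed speckle intensity is exponentially
# distributed; a counterfeit is modeled here by a different mean intensity.
genuine_ref = rng.exponential(1.0, 5000)     # stored reference signature
genuine_probe = rng.exponential(1.0, 5000)   # authentic sample under test
fake_probe = rng.exponential(1.6, 5000)      # counterfeit sample

def authenticate(probe, reference, threshold=0.05):
    """Accept when the two-sample KS statistic between probe and stored
    reference distributions is below an (illustrative) threshold."""
    stat, _ = ks_2samp(probe, reference)
    return bool(stat < threshold)

# Genuine probe accepted, counterfeit rejected.
print(authenticate(genuine_probe, genuine_ref), authenticate(fake_probe, genuine_ref))
```

In practice the decision boundary would be learned (e.g., by an SVM over several polarimetric statistics) rather than fixed by hand.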
NASA Technical Reports Server (NTRS)
Weinberg, David H.; Gott, J. Richard, III; Melott, Adrian L.
1987-01-01
Many models for the formation of galaxies and large-scale structure assume a spectrum of random phase (Gaussian), small-amplitude density fluctuations as initial conditions. In such scenarios, the topology of the galaxy distribution on large scales relates directly to the topology of the initial density fluctuations. Here a quantitative measure of topology - the genus of contours in a smoothed density distribution - is described and applied to numerical simulations of galaxy clustering, to a variety of three-dimensional toy models, and to a volume-limited sample of the CfA redshift survey. For random phase distributions the genus of density contours exhibits a universal dependence on threshold density. The clustering simulations show that a smoothing length of 2-3 times the mass correlation length is sufficient to recover the topology of the initial fluctuations from the evolved galaxy distribution. Cold dark matter and white noise models retain a random phase topology at shorter smoothing lengths, but massive neutrino models develop a cellular topology.
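A random-phase (Gaussian) initial density field of the kind these models assume can be generated by filtering white noise in Fourier space, which gives each mode an independent random phase. The grid size and spectral slope below are arbitrary illustrative choices:

```python
import numpy as np

def gaussian_random_field(n=128, slope=-2.0, seed=0):
    """Gaussian ('random phase') field with power spectrum P(k) ~ k**slope,
    built by filtering white noise in Fourier space."""
    rng = np.random.default_rng(seed)
    white = rng.normal(size=(n, n))
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    k = np.sqrt(kx**2 + ky**2)
    amp = np.zeros_like(k)
    amp[k > 0] = k[k > 0] ** (slope / 2.0)   # sqrt of power spectrum; DC mode removed
    field = np.fft.ifft2(np.fft.fft2(white) * amp).real
    return field / field.std()

delta = gaussian_random_field()
print(round(float(delta.std()), 6))          # normalized to unit variance: 1.0
```

Thresholding `delta` at a sequence of densities and computing the genus of the resulting contours traces out the universal random-phase genus curve described above.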
NASA Astrophysics Data System (ADS)
Kim, Young-Min; Jung, In-Ho
2015-06-01
A complete literature review, critical evaluation, and thermodynamic optimization of phase equilibria and thermodynamic properties of all available oxide phases in the MnO-B2O3 and MnO-B2O3-SiO2 systems at 1 bar pressure are presented. Due to the lack of experimental data in these systems, the systematic trends of CaO- and MgO-containing systems were taken into account in the optimization. The molten oxide phase is described by the Modified Quasichemical Model. A set of optimized model parameters of all phases is obtained which reproduces all available and reliable thermodynamic and phase equilibrium data. The unexplored binary and ternary phase diagrams of the MnO-B2O3 and MnO-B2O3-SiO2 systems have been predicted for the first time. Thermodynamic calculations relevant to the oxidation of advanced high-strength steels containing boron show that B can form a liquid B2O3-SiO2-rich phase in the annealing furnace under a reducing N2-H2 atmosphere, which can significantly influence the wetting behavior of liquid Zn in the Zn galvanizing process.
Single-random-phase holographic encryption of images
NASA Astrophysics Data System (ADS)
Tsang, P. W. M.
2017-02-01
In this paper, a method is proposed for encrypting an optical image onto a phase-only hologram, utilizing a single random phase mask as the private encryption key. The encryption process can be divided into three stages. First, the source image to be encrypted is scaled in size and pasted onto an arbitrary position in a larger global image. The remaining areas of the global image that are not occupied by the source image can be filled with randomly generated content. As such, the global image as a whole is very different from the source image, but at the same time the visual quality of the source image is preserved. Second, a digital Fresnel hologram is generated from the new image and converted into a phase-only hologram based on bi-directional error diffusion. In the final stage, a fixed random phase mask is added to the phase-only hologram as the private encryption key. In the decryption process, the global image, together with the source image it contains, can be reconstructed from the phase-only hologram if it is overlaid with the correct decryption key. The proposed method is highly resistant to different forms of plain-text attacks, which are commonly used to deduce the encryption key in existing holographic encryption processes. In addition, both the encryption and the decryption processes are simple and easy to implement.
1976-05-01
random walk photon scattering, geometric optics refraction at a thin phase screen, plane wave scattering from a thin screen in the Fraunhofer limit and ... significant cases. In the geometric optics regime the distribution of density of allowable multipath rays is Gaussianly distributed and the power ... Contents: 3.1 Random Walk Approach to Scattering; 3.2 Phase Screen Approximation to Strong Scattering; 3.3 Ray Optics and Stationary Phase Analysis
Many-body localization in Ising models with random long-range interactions
NASA Astrophysics Data System (ADS)
Li, Haoyuan; Wang, Jia; Liu, Xia-Ji; Hu, Hui
2016-12-01
We theoretically investigate the many-body localization phase transition in a one-dimensional Ising spin chain with random long-range spin-spin interactions, V_ij ∝ |i-j|^(-α), where the exponent of the interaction range α can be tuned from zero to infinitely large. By using exact diagonalization, we calculate the half-chain entanglement entropy and the energy spectral statistics and use them to characterize the phase transition towards the many-body localization phase at infinite temperature and at sufficiently large disorder strength. We perform finite-size scaling to extract the critical disorder strength and the critical exponent of the divergent localization length. With increasing α, the critical exponent experiences a sharp increase at about α_c ≃ 1.2 and then gradually decreases to a value found earlier in a disordered short-ranged interacting spin chain. For α < α_c, we find that the system is mostly localized and the increase in the disorder strength may drive a transition between two many-body localized phases. In contrast, for α > α_c, the transition is from a thermalized phase to the many-body localization phase. Our predictions could be experimentally tested with an ion-trap quantum emulator with programmable random long-range interactions, or with randomly distributed Rydberg atoms or polar molecules in lattices.
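The spectral-statistics diagnostic used here can be sketched for a small chain with exact diagonalization. The couplings, field strengths, and system size below are illustrative assumptions, not the paper's parameter scan:

```python
import numpy as np

def r_statistic(L=8, alpha=1.2, disorder=3.0, seed=0):
    """Mean adjacent-gap ratio <r> for a transverse-field Ising chain with
    random long-range couplings ~ U(-1,1) * |i-j|**(-alpha)."""
    rng = np.random.default_rng(seed)
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    sz = np.array([[1.0, 0.0], [0.0, -1.0]])

    def op(single, site):
        """Embed a single-site operator at `site` in the L-spin Hilbert space."""
        out = np.array([[1.0]])
        for s in range(L):
            out = np.kron(out, single if s == site else np.eye(2))
        return out

    H = np.zeros((2**L, 2**L))
    for i in range(L):
        H += disorder * rng.uniform(-1, 1) * op(sz, i)   # random z fields
        H += op(sx, i)                                    # transverse field
        for j in range(i + 1, L):
            V = rng.uniform(-1, 1) * abs(i - j) ** (-alpha)
            H += V * (op(sz, i) @ op(sz, j))              # long-range Ising coupling

    E = np.linalg.eigvalsh(H)
    gaps = np.diff(E)
    ratios = np.minimum(gaps[:-1], gaps[1:]) / np.maximum(gaps[:-1], gaps[1:])
    return float(np.mean(ratios))

r = r_statistic()
print(0.0 < r < 1.0)
```

As a rule of thumb, <r> ≈ 0.53 signals ergodic (GOE) level statistics, while <r> ≈ 0.39 signals the Poisson statistics characteristic of the localized phase; the scaling analysis in the paper tracks how this crossover moves with α and disorder.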
Donoho, David L; Gavish, Matan; Montanari, Andrea
2013-05-21
Let X_0 be an unknown M by N matrix. In matrix recovery, one takes n < MN linear measurements y_1, …, y_n of X_0, where y_i = Tr(A_i^T X_0) and each A_i is an M by N matrix. A popular approach for matrix recovery is nuclear norm minimization (NNM): solving the convex optimization problem min ||X||_* subject to y_i = Tr(A_i^T X) for all 1 ≤ i ≤ n, where ||·||_* denotes the nuclear norm, namely, the sum of singular values. Empirical work reveals a phase transition curve, stated in terms of the undersampling fraction δ(n,M,N) = n/(MN), rank fraction ρ = rank(X_0)/min{M,N}, and aspect ratio β = M/N. Specifically, when the measurement matrices A_i have independent standard Gaussian random entries, a curve δ*(ρ) = δ*(ρ;β) exists such that, if δ > δ*(ρ), NNM typically succeeds for large M,N, whereas if δ < δ*(ρ), it typically fails. An apparently quite different problem is matrix denoising in Gaussian noise, in which an unknown M by N matrix X_0 is to be estimated based on direct noisy measurements Y = X_0 + Z, where the matrix Z has independent and identically distributed Gaussian entries. A popular matrix denoising scheme solves the unconstrained optimization problem min ||Y - X||_F^2/2 + λ||X||_*. When optimally tuned, this scheme achieves the asymptotic minimax mean-squared error M(ρ;β) = lim_{M,N→∞} inf_λ sup_{rank(X) ≤ ρ·M} MSE(X, X̂(λ)), where M/N → β. We report extensive experiments showing that the phase transition δ*(ρ) in the first problem, matrix recovery from Gaussian measurements, coincides with the minimax risk curve M(ρ) = M(ρ;β) in the second problem, matrix denoising in Gaussian noise: δ*(ρ) = M(ρ), for any rank fraction 0 < ρ < 1 (at each common aspect ratio β). Our experiments considered matrices belonging to two constraint classes: real M by N matrices, of various ranks and aspect ratios, and real symmetric positive-semidefinite N by N matrices, of various ranks.
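The matrix denoising problem min ||Y - X||_F^2/2 + λ||X||_* has a closed-form solution: soft-threshold the singular values of Y by λ. A minimal sketch with a synthetic low-rank matrix (dimensions, rank, noise level, and λ are illustrative choices):

```python
import numpy as np

def svt_denoise(Y, lam):
    """Closed-form solution of min_X ||Y - X||_F^2 / 2 + lam * ||X||_* :
    soft-threshold the singular values of Y by lam."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

rng = np.random.default_rng(0)
M, N, rank = 40, 60, 3
X0 = rng.normal(size=(M, rank)) @ rng.normal(size=(rank, N))  # low-rank truth
Y = X0 + 0.1 * rng.normal(size=(M, N))                        # Gaussian noise
X_hat = svt_denoise(Y, lam=2.0)
print(np.linalg.matrix_rank(X_hat))                           # 3: noise directions zeroed
```

Here λ = 2 exceeds the largest noise singular value (≈ 0.1·(√M + √N) ≈ 1.4), so the pure-noise directions are annihilated while the three signal directions survive with mild shrinkage.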
Kurita, Takashi; Sueda, Keiichi; Tsubakimoto, Koji; Miyanaga, Noriaki
2010-07-05
We experimentally demonstrated coherent beam combining using optical parametric amplification with a nonlinear crystal pumped by a random-phased multiple-beam array of the second harmonic of a Nd:YAG laser at 10-Hz repetition rate. In the proof-of-principle experiment, the phase jump between two pump beams was precisely controlled by a motorized actuator. For the demonstration of multiple-beam combining, a random phase plate was used to create random-phased beamlets as a pump pulse. Far-field patterns of the pump, the signal, and the idler indicated that spatially coherent signal beams were obtained in both cases. This approach allows scaling of the intensity of optical parametric chirped pulse amplification up to the exa-watt level while maintaining diffraction-limited beam quality.
Optimal phase estimation with arbitrary a priori knowledge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demkowicz-Dobrzanski, Rafal
2011-06-15
The optimal phase-estimation strategy is derived when partial a priori knowledge of the estimated phase is available. The solution is found with the help of the most famous result from entanglement theory: the positive partial transpose criterion. The structure of the optimal measurements, estimators, and the optimal probe states is analyzed. This Rapid Communication provides a unified framework bridging the gap in the literature on the subject, which until now dealt almost exclusively with two extreme cases: almost perfect knowledge (local approach based on Fisher information) and no a priori knowledge (global approach based on covariant measurements). Special attention is paid to a natural a priori probability distribution arising from a diffusion process.
Zhao, Wenle; Weng, Yanqiu; Wu, Qi; Palesch, Yuko
2012-01-01
To evaluate the performance of randomization designs under various parameter settings and trial sample sizes, and to identify optimal designs with respect to both treatment imbalance and allocation randomness, we evaluate 260 design scenarios from 14 randomization designs under 15 sample sizes ranging from 10 to 300, using three measures for imbalance and three measures for randomness. The maximum absolute imbalance and the correct guess (CG) probability are selected to assess the trade-off performance of each randomization design. As measured by the maximum absolute imbalance and the CG probability, we found that the performances of the 14 randomization designs are located in a closed region with the upper boundary (worst case) given by Efron's biased coin design (BCD) and the lower boundary (best case) from Soares and Wu's big stick design (BSD). Designs close to the lower boundary provide a smaller imbalance and a higher randomness than designs close to the upper boundary. Our research suggests that optimization of randomization design is possible based on quantified evaluation of imbalance and randomness. Based on the maximum imbalance and CG probability, the BSD, Chen's biased coin design with imbalance tolerance method, and Chen's Ehrenfest urn design perform better than the popularly used permuted block design, EBCD, and Wei's urn design. Copyright © 2011 John Wiley & Sons, Ltd.
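The trade-off the authors quantify can be reproduced for a single design. Below is a minimal simulation of Soares and Wu's big stick design, tracking the maximum absolute imbalance and the correct-guess fraction under a convergence guessing strategy; the tolerance and sample size are illustrative choices:

```python
import numpy as np

def big_stick_trial(n=100, tolerance=3, seed=0):
    """One trial under the big stick design: fair-coin assignment unless the
    group imbalance hits the tolerance, in which case the lagging arm is forced.
    Returns the maximum absolute imbalance and the correct-guess fraction."""
    rng = np.random.default_rng(seed)
    diff = 0                                   # (count in arm A) - (count in arm B)
    max_imb = 0
    correct = 0
    for _ in range(n):
        if diff >= tolerance:
            arm = 0                            # force arm B
        elif diff <= -tolerance:
            arm = 1                            # force arm A
        else:
            arm = int(rng.integers(2))         # fair coin
        # Convergence guessing: guess the currently smaller arm, coin flip on ties.
        guess = 1 if diff < 0 else 0 if diff > 0 else int(rng.integers(2))
        correct += int(guess == arm)
        diff += 1 if arm == 1 else -1
        max_imb = max(max_imb, abs(diff))
    return max_imb, correct / n

imb, cg = big_stick_trial()
print(imb <= 3)                                # True: imbalance is capped by the tolerance
```

Averaging `cg` over many seeds estimates the CG probability; sweeping the tolerance traces the imbalance-randomness trade-off the paper maps across designs.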
Theory of the amplitude-phase retrieval in any linear-transform system and its applications
NASA Astrophysics Data System (ADS)
Yang, Guozhen; Gu, Ben-Yuan; Dong, Bi-Zhen
1992-12-01
This paper is a summary of the theory of the amplitude-phase retrieval problem in any linear-transform system and its applications, based on our previous works of the past decade. We describe the general statement of the amplitude-phase retrieval problem in an imaging system and derive a set of equations governing the amplitude-phase distribution by rigorous mathematical derivation. We then show that, by using these equations and an iterative algorithm, a variety of amplitude-phase problems can be successfully handled. We carry out systematic investigations and comprehensive numerical calculations to demonstrate the utilization of this new algorithm in various transform systems. For instance, we have achieved phase retrieval from two intensity measurements in an imaging system with diffraction loss (non-unitary transform), both theoretically and experimentally, and the recovery of a model real image from its Hartley-transform modulus only, in one- and two-dimensional cases. We discuss phase retrieval from a single intensity only, based on the sampling theorem and our algorithm. We also apply this algorithm to provide an optimal design of the phase-adjusted plate for a phase-adjustment focusing laser accelerator and a design approach for a single phase-only element implementing optical interconnects. In order to closely simulate really measured data, we examine in detail the reconstruction of an image from its spectral modulus corrupted by random noise. The results show that a convergent solution can always be obtained and the quality of the recovered image is satisfactory. We also indicate the relationship and distinction between our algorithm and the original Gerchberg-Saxton algorithm. From these studies, we conclude that our algorithm shows great capability to deal with comprehensive phase-retrieval problems in imaging systems and the inverse problem in solid state physics. It may open a new way to solve important inverse source problems that appear extensively in physics.
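The original Gerchberg-Saxton iteration that this theory generalizes can be sketched as alternating amplitude projections between the input and Fourier planes. This is the textbook two-intensity version on a synthetic, energy-matched target, not the authors' extended algorithm:

```python
import numpy as np

def gerchberg_saxton(source_amp, target_amp, iterations=100, seed=0):
    """Classic Gerchberg-Saxton iteration: alternately impose the known
    amplitudes in the input and Fourier planes, keeping only the phase."""
    rng = np.random.default_rng(seed)
    field = source_amp * np.exp(2j * np.pi * rng.random(source_amp.shape))
    for _ in range(iterations):
        F = np.fft.fft2(field)
        F = target_amp * np.exp(1j * np.angle(F))          # Fourier-plane constraint
        field = np.fft.ifft2(F)
        field = source_amp * np.exp(1j * np.angle(field))  # input-plane constraint
    return np.angle(field)

n = 32
source = np.ones((n, n))          # uniform input amplitude
target = np.zeros((n, n))
target[12:20, 12:20] = 128.0      # Fourier-plane target, energy-matched via Parseval

phase = gerchberg_saxton(source, target)
recon = np.abs(np.fft.fft2(source * np.exp(1j * phase)))
err = np.linalg.norm(recon - target) / np.linalg.norm(target)
print(err < 1.0)                  # the retrieved phase approximates the target amplitude
```

The generalization discussed in the abstract replaces the unitary FFT pair with an arbitrary (possibly non-unitary) linear transform and its adjoint.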
Nezhadali, Azizollah; Motlagh, Maryam Omidvar; Sadeghzadeh, Samira
2018-02-05
A selective method based on molecularly imprinted polymer (MIP) solid-phase extraction (SPE), using UV-Vis spectrophotometry as the detection technique, was developed for the determination of fluoxetine (FLU) in pharmaceutical and human serum samples. The MIPs were synthesized using pyrrole as a functional monomer in the presence of FLU as a template molecule. The factors affecting the preparation and extraction ability of the MIP, such as amount of sorbent, initiator concentration, monomer-to-template ratio, uptake shaking rate, uptake time, washing buffer pH, take shaking rate, taking time, and polymerization time, were considered for optimization. First, a Plackett-Burman design (PBD) consisting of 12 randomized runs was applied to determine the influence of each factor. The subsequent optimization steps were performed using central composite design (CCD), an artificial neural network (ANN), and a genetic algorithm (GA). Under optimal conditions the calibration curve showed linearity over a concentration range of 10^-7-10^-8 M with a correlation coefficient (R^2) of 0.9970. The limit of detection (LOD) for FLU was 6.56×10^-9 M. The repeatability of the method was 1.61%. The synthesized MIP sorbent showed good selectivity and sensitivity toward FLU. The MIP/SPE method was successfully used for the determination of FLU in pharmaceutical, serum, and plasma samples. Copyright © 2017 Elsevier B.V. All rights reserved.
Improved Compressive Sensing of Natural Scenes Using Localized Random Sampling
Barranca, Victor J.; Kovačič, Gregor; Zhou, Douglas; Cai, David
2016-01-01
Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the localized random CS optimal parameter choice is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging. PMID:27555464
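The sampling scheme can be sketched as a measurement-matrix construction: each row picks a random center pixel and includes nearby pixels with a distance-decaying probability. The exponential decay profile and all parameters below are assumptions for illustration; the paper compares such matrices against uniformly random ones over a parameter sweep:

```python
import numpy as np

def localized_random_matrix(n_measurements, side, scale=2.0, seed=0):
    """Binary sampling matrix in the spirit of localized random sampling:
    each row selects a random center pixel and includes nearby pixels with
    probability exp(-d/scale), where d is the distance to the center."""
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[0:side, 0:side]
    A = np.zeros((n_measurements, side * side))
    for m in range(n_measurements):
        cy, cx = rng.integers(side, size=2)
        d = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
        mask = rng.random((side, side)) < np.exp(-d / scale)  # center always kept
        A[m] = mask.ravel().astype(float)
    return A

A = localized_random_matrix(64, 16)   # 64 measurements of a 16x16 image
print(A.shape)                        # (64, 256)
```

Each measurement y = A @ x.ravel() then pools a localized, receptive-field-like neighborhood, and x is recovered with a standard sparse solver.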
Stochastic dynamics and combinatorial optimization
NASA Astrophysics Data System (ADS)
Ovchinnikov, Igor V.; Wang, Kang L.
2017-11-01
Natural dynamics is often dominated by sudden nonlinear processes such as neuroavalanches, gamma-ray bursts, solar flares, etc., that exhibit scale-free statistics much in the spirit of the logarithmic Richter scale for earthquake magnitudes. On phase diagrams, stochastic dynamical systems (DSs) exhibiting this type of dynamics belong to the finite-width phase (N-phase for brevity) that precedes ordinary chaotic behavior and that is known under such names as noise-induced chaos, self-organized criticality, dynamical complexity, etc. Within the recently proposed supersymmetric theory of stochastic dynamics, the N-phase can be roughly interpreted as the noise-induced “overlap” between integrable and chaotic deterministic dynamics. As a result, the N-phase dynamics inherits the properties of both. Here, we analyze this unique set of properties and conclude that N-phase DSs must naturally be the most efficient optimizers: on one hand, N-phase DSs have integrable flows with well-defined attractors that can be associated with candidate solutions and, on the other hand, the noise-induced attractor-to-attractor dynamics in the N-phase is effectively chaotic or aperiodic, so that a DS must avoid revisiting solutions/attractors, thus accelerating the search for the best solution. Based on this understanding, we propose a method for stochastic dynamical optimization using N-phase DSs. This method can be viewed as a hybrid of the simulated and chaotic annealing methods. Our proposition can result in a new generation of hardware devices for efficient solution of various search and/or combinatorial optimization problems.
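As a baseline for comparison, plain simulated annealing, one half of the proposed hybrid, is easy to sketch on a random Ising energy. The cooling schedule, problem instance, and parameters below are illustrative choices, not the authors' N-phase hardware scheme:

```python
import numpy as np

def anneal_ising(J, steps=20000, T0=2.0, seed=0):
    """Plain simulated annealing for a random Ising energy
    E(s) = -s^T J s, a baseline for annealing-style search."""
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    s = rng.choice([-1, 1], n)
    energy = -s @ J @ s
    best = energy
    for t in range(steps):
        T = T0 * (1.0 - t / steps) + 1e-6        # linear cooling schedule
        i = rng.integers(n)
        dE = 4.0 * s[i] * (J[i] @ s)             # energy change of flipping spin i
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i] = -s[i]                         # Metropolis accept
            energy += dE
            best = min(best, energy)
    return best

rng = np.random.default_rng(1)
n = 30
J = rng.normal(size=(n, n))
J = (J + J.T) / 2                                # symmetric couplings
np.fill_diagonal(J, 0.0)
best_energy = anneal_ising(J)
print(best_energy < 0.0)                         # annealing finds a low-energy state
```

The N-phase proposal would replace the hand-tuned cooling schedule with intrinsic noise-induced attractor-hopping dynamics that avoids revisiting candidate solutions.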
Optimal lunar soft landing trajectories using taboo evolutionary programming
NASA Astrophysics Data System (ADS)
Mutyalarao, M.; Raj, M. Xavier James
A safe lunar landing is a key factor in undertaking effective lunar exploration. A lunar lander mission consists of four phases: the launch phase, the Earth-Moon transfer phase, the circumlunar phase, and the landing phase. The landing can be either a hard landing or a soft landing. A hard landing means the vehicle lands under the influence of gravity without any deceleration measures, whereas a soft landing reduces the vertical velocity of the vehicle before touchdown. Therefore, for the safety of the astronauts as well as the vehicle, a lunar soft landing with an acceptable velocity is essential, and it is important to design an optimal lunar soft landing trajectory with minimum fuel consumption. Optimization of lunar soft landing is a complex optimal control problem. In this paper, an analysis of lunar soft landing from a parking orbit around the Moon has been carried out. A two-dimensional trajectory optimization problem is attempted. The problem is complex due to the presence of system constraints. To solve for the time history of the control parameters, the problem is converted into a two-point boundary value problem by using Pontryagin's maximum principle. Taboo Evolutionary Programming (TEP) is a stochastic technique developed in recent years and successfully implemented in several fields of research. It combines the features of taboo search and single-point-mutation evolutionary programming. Identifying the best unknown parameters of the problem under consideration is the central idea of many space trajectory optimization problems. The TEP technique is used in the present methodology for the best estimation of the initial unknown parameters by minimizing an objective function expressed in terms of fuel requirements. The optimal estimation subsequently results in an optimal trajectory design of a module for soft landing on the Moon from a lunar parking orbit.
Numerical simulations demonstrate that the proposed approach is highly efficient and reduces the fuel consumption. Comparison with available results in the literature shows that the solution of the present algorithm is better than that of some existing algorithms. Keywords: soft landing, trajectory optimization, evolutionary programming, control parameters, Pontryagin principle.
Tjønna, Arnt Erik; Ramos, Joyce S; Pressler, Axel; Halle, Martin; Jungbluth, Klaus; Ermacora, Erika; Salvesen, Øyvind; Rodrigues, Jhennyfer; Bueno, Carlos Roberto; Munk, Peter Scott; Coombes, Jeff; Wisløff, Ulrik
2018-04-02
Metabolic syndrome substantially increases the risk of cardiovascular events. It is therefore imperative to develop or optimize ways to prevent or attenuate this condition. Exercise training has long been recognized as a cornerstone therapy for reducing the individual cardiovascular risk factors constituting the metabolic syndrome. However, the optimal exercise dose and its feasibility in a real-world setting have yet to be established. The primary objective of this randomized trial is to investigate the effects of different volumes of aerobic interval training (AIT) compared to the current exercise guideline of moderate-intensity continuous training (MICT) on the composite number of cardiovascular disease risk factors constituting the metabolic syndrome after a 16-week, 1-year, and 3-year follow-up. This is a randomized international multi-center trial including men and women aged ≥30 years diagnosed with the metabolic syndrome according to the International Diabetes Federation criteria. Recruitment began in August 2012 and concluded in December 2016. This trial consists of supervised and unsupervised phases to evaluate the efficacy and feasibility of different exercise doses on the metabolic syndrome in a real-world setting. This study aims to include and randomize 465 participants to 3 years of one of the following training groups: i) 3 times/week of 4 × 4 min AIT at 85-95% peak heart rate (HRpeak); ii) 3 times/week of 1 × 4 min AIT at 85-95% HRpeak; or iii) 5-7 times/week of ≥30 min MICT at 60-70% HRpeak. Clinical examinations, physical tests, and questionnaires are administered to all participants at all testing time points (baseline, 16 weeks, and after 1 and 3 years). This multi-center international trial aims to ease the healthcare and economic burden of treating end-stage CVD-related conditions, such as stroke and myocardial infarction, that can eventually emerge from the metabolic syndrome.
Clinical registration number: NCT01676870 , ClinicalTrials.gov (August 31, 2012).
Moseley, Merrick J; Wallace, Michael P; Stephens, David A; Fielder, Alistair R; Smith, Laura C; Stewart, Catherine E
2015-04-25
Amblyopia is the commonest visual disorder of childhood in Western societies, affecting, predominantly, spatial visual function. Treatment typically requires a period of refractive correction ('optical treatment') followed by occlusion: covering the nonamblyopic eye with a fabric patch for varying daily durations. Recent studies have provided insight into the optimal amount of patching ('dose'), leading to the adoption of standardized dosing strategies, which, though an advance on previous ad-hoc regimens, take little account of individual patient characteristics. This trial compares the effectiveness of a standardized dosing strategy (that is, a fixed daily occlusion dose based on disease severity) with a personalized dosing strategy (derived from known treatment dose-response functions), in which an initially prescribed occlusion dose is modulated, in a systematic manner, dependent on treatment compliance. A total of 120 children aged between 3 and 8 years of age diagnosed with amblyopia in association with either anisometropia or strabismus, or both, will be randomized to receive either a standardized or a personalized occlusion dose regimen. To avoid confounding by the known benefits of refractive correction, participants will not be randomized until they have completed an optical treatment phase. The primary study objective is to determine whether, at trial endpoint, participants receiving a personalized dosing strategy require fewer hours of occlusion than those in receipt of a standardized dosing strategy. Secondary objectives are to quantify the relationship between observed changes in visual acuity (logMAR, logarithm of the Minimum Angle of Resolution) with age, amblyopia type, and severity of amblyopic visual acuity deficit. 
This is the first randomized controlled trial of occlusion therapy for amblyopia to compare a treatment arm representative of current best practice with an arm representative of an entirely novel treatment regimen based on statistical modelling of previous trial outcome data. Should the personalized dosing strategy demonstrate superiority over the standardized dosing strategy, then its adoption into routine practice could bring practical benefits in reducing the duration of treatment needed to achieve an optimal outcome. ISRCTN ISRCTN12292232.
Investigations into polymer and carbon nanomaterial separations
NASA Astrophysics Data System (ADS)
Owens, Cherie Nicole
The work of this thesis follows a common theme of research focused on innovative separation science. Polyhydroxyalkanoates are biodegradable polyesters produced by bacteria that can have a wide distribution in molecular weight and monomer composition. This large distribution often leads to unpredictable physical properties, making commercial applications challenging. To improve polymer homogeneity and obtain samples with a clear set of physical characteristics, poly-3-hydroxyvalerate-co-3-hydroxybutyrate copolymers were fractionated using gradient polymer elution chromatography (GPEC) with carefully optimized gradients. The resulting fractions were analyzed using Size Exclusion Chromatography (SEC) and NMR. As the percentage of “good” solvent was increased in the mobile phase, the polymers eluted with decreasing percentage of 3-hydroxyvalerate and increasing molecular weight, which indicates the importance of precipitation/redissolution in the separation. As such, GPEC is an excellent choice to provide polyhydroxyalkanoate samples with a narrower distribution in composition than the original bulk copolymer. Additionally, the critical condition was found for 3-hydroxybutyrate to erase its effects on retention of the copolymer. Copolymer samples were then separated using Liquid Chromatography at the Critical Condition (LCCC), and it was determined that poly(3-hydroxyvalerate-co-3-hydroxybutyrate) is a statistically random copolymer. The second project uses ultra-thin layer chromatography (UTLC) to study the performance and behavior of polyhydroxybutyrate (P3HB) as a chromatographic substrate. Polyhydroxybutyrate is a liquid crystalline polymer that can be electrospun; electrospinning involves the formation of nanofibers through the application of an electric potential to a polymer solution.
Precisely controlled optimization of electrospinning parameters was conducted to achieve the smallest-diameter PHA nanofibers to date for use as novel UTLC substrates. Additionally, aligned electrospun UTLC (AE-UTLC) substrates were developed for comparison with the randomly oriented electrospun (E-UTLC) devices. The PHB plates were compared to commercially available substrates for the separation of biological samples: nucleotides and steroids. The electrospun substrates show lower band broadening and higher reproducibility in a smaller development distance than commercially available TLC plates, conserving both resources and time. The AE-UTLC plates provided further enhancement of reproducibility and development time compared to E-UTLC plates. Thus, the P3HB E-UTLC phases are an excellent sustainable option for TLC, as they are biodegradable and perform better than commercial phases. A third topic of interest is the study of ordered carbon nanomaterials. The typical amorphous carbon used as a stationary phase in Hypercarb® is known to consist of basal- and edge-plane oriented sites. This heterogeneity of the stationary phase can lead to peak broadening that may be improved by using homogeneous carbon throughout. Amorphous, basal-plane, and edge-plane carbons were produced in-house through membrane template synthesis and were then used separately as chromatographic phases in capillary electrochromatography (CEC). Differences in chromatographic performance between these species were assessed by modeling retention data for test solutes to determine Linear Solvation Energy Relationships (LSER). The LSER study for the three carbon phases indicates that the main difference lies in the polarizability and hydrogen-bonding character of the surface, leading to unique solute interactions. These results highlight the potential of using these phases independently.
Unsolved Problems of Intracellular Noise
NASA Astrophysics Data System (ADS)
Paulsson, Johan
2003-05-01
Many molecules are present at such low numbers per cell that significant fluctuations arise spontaneously. Such 'noise' can randomize developmental pathways, disrupt cell cycle control, or force metabolites away from their optimal levels. It can also be exploited for non-genetic individuality or, surprisingly, for more reliable and deterministic control. However, in spite of the mechanistic and evolutionary significance of noise, both explicit modeling and implicit verbal reasoning in molecular biology are completely dominated by macroscopic kinetics. Here I discuss some particularly under-addressed issues of noise in genetic and metabolic networks: 1) relations between systematic macro- and mesoscopic approaches; 2) order and disorder in gene expression; 3) autorepression for checking fluctuations; 4) noise suppression by noise; 5) phase transitions in metabolic systems; 6) effects of cell growth and division; and 7) mono- and bistable bimodal switches.
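As a concrete illustration of how such fluctuations arise (our toy example, not from the article): in the simplest birth-death model, a species is produced at constant rate k and degraded at rate g per molecule. Simulating it exactly with Gillespie's algorithm shows that the steady-state copy number is Poisson-distributed, so the variance-to-mean (Fano) ratio sits near 1; all rates below are illustrative.

```python
# Toy sketch: intrinsic noise in a birth-death process via Gillespie's
# exact stochastic simulation algorithm. Reactions: ∅ -> X at rate k,
# X -> ∅ at rate g*n, where n is the current copy number.
import random

def gillespie_birth_death(k=50.0, g=1.0, t_end=2000.0, seed=1):
    rng = random.Random(seed)
    t, n = 0.0, 0
    samples, next_sample = [], 100.0       # discard the initial transient
    while t < t_end:
        a_total = k + g * n                # total propensity (> 0 since k > 0)
        t += rng.expovariate(a_total)      # exponential waiting time
        while next_sample < t and next_sample <= t_end:
            samples.append(n)              # state is constant between events
            next_sample += 1.0
        if rng.random() * a_total < k:
            n += 1                         # birth
        else:
            n -= 1                         # death
    return samples

samples = gillespie_birth_death()
mean = sum(samples) / len(samples)
fano = sum((s - mean) ** 2 for s in samples) / len(samples) / mean
print(round(mean, 1), round(fano, 2))      # steady state: mean ≈ k/g, Fano ≈ 1
```

With mean copy number k/g = 50, relative fluctuations are already about 1/√50 ≈ 14%, which is the scale of spontaneous noise the abstract refers to.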
Tailored Codes for Small Quantum Memories
NASA Astrophysics Data System (ADS)
Robertson, Alan; Granade, Christopher; Bartlett, Stephen D.; Flammia, Steven T.
2017-12-01
We demonstrate that small quantum memories, realized via quantum error correction in multiqubit devices, can benefit substantially from choosing a quantum code that is tailored to the relevant error model of the system. For a biased noise model, with independent bit and phase flips occurring at different rates, we show that a single code greatly outperforms the well-studied Steane code across the full range of parameters of the noise model, including for unbiased noise. In fact, this tailored code performs almost optimally when compared with 10 000 randomly selected stabilizer codes of comparable experimental complexity. Tailored codes can even outperform the Steane code with realistic experimental noise, and without any increase in experimental complexity, as we demonstrate by comparison with the observed error model of a recent seven-qubit trapped-ion experiment.
Reichhardt, Charles; Olson Reichhardt, Cynthia Jane
2016-12-20
Here, we review the depinning and nonequilibrium phases of collectively interacting particle systems driven over random or periodic substrates. This type of system is relevant to vortices in type-II superconductors, sliding charge density waves, electron crystals, colloids, stripe and pattern forming systems, and skyrmions, and could also have connections to jamming, glassy behaviors, and active matter. These systems are also ideal for exploring the broader issues of characterizing transient and steady state nonequilibrium flow phases as well as nonequilibrium phase transitions between distinct dynamical phases, analogous to phase transitions between different equilibrium states. We discuss the differences between elastic and plastic depinning on random substrates and the different types of nonequilibrium phases which are associated with specific features in the velocity-force curves, fluctuation spectra, scaling relations, and local or global particle ordering. We describe how these quantities can change depending on the dimension, anisotropy, disorder strength, and the presence of hysteresis. Within the moving phase we discuss how there can be a transition from a liquid-like state to dynamically ordered moving crystal, smectic, or nematic states. Systems with periodic or quasiperiodic substrates can have multiple nonequilibrium second or first order transitions in the moving state between chaotic and coherent phases, and can exhibit hysteresis. We also discuss systems with competing repulsive and attractive interactions, which undergo dynamical transitions into stripes and other complex morphologies when driven over random substrates. Throughout this work we highlight open issues and future directions such as absorbing phase transitions, nonequilibrium work relations, inertia, the role of non-dissipative dynamics such as Magnus effects, and how these results could be extended to the broader issues of plasticity in crystals, amorphous solids, and jamming phenomena.
Application of phase-change materials in memory taxonomy.
Wang, Lei; Tu, Liang; Wen, Jing
2017-01-01
Phase-change materials are suitable for data storage because they exhibit reversible transitions between crystalline and amorphous states that have distinguishable electrical and optical properties. Consequently, these materials find applications in diverse memory devices ranging from conventional optical discs to emerging nanophotonic devices. Current research efforts are mostly devoted to phase-change random access memory, whereas the applications of phase-change materials in other types of memory devices are rarely reported. Here we review the physical principles of phase-change materials and devices, aiming to help researchers understand the concept of phase-change memory. We classify phase-change memory devices into phase-change optical discs, phase-change scanning probe memory, phase-change random access memory, and phase-change nanophotonic devices, according to their locations in the memory hierarchy. For each device type we discuss the physical principles in conjunction with the merits and weaknesses for data storage applications. We also outline state-of-the-art technologies and future prospects.
Robustness of optimal random searches in fragmented environments
NASA Astrophysics Data System (ADS)
Wosniack, M. E.; Santos, M. C.; Raposo, E. P.; Viswanathan, G. M.; da Luz, M. G. E.
2015-05-01
The random search problem is a challenging and interdisciplinary topic of research in statistical physics. Realistic searches usually take place in nonuniform, heterogeneous distributions of targets, e.g., patchy environments and fragmented habitats in ecological systems. Here we present a comprehensive numerical study of search efficiency in arbitrarily fragmented landscapes with unlimited visits to targets that can only be found within patches. We assume a random walker selecting uniformly distributed turning angles and step lengths from an inverse power-law tailed distribution with exponent μ. Our main finding is that for a large class of fragmented environments the optimal strategy corresponds approximately to the same value μ_opt ≈ 2. Moreover, this exponent is indistinguishable from the well-known exact optimal value μ_opt = 2 for the low-density limit of homogeneously distributed revisitable targets. Surprisingly, the best search strategies do not depend (or depend only weakly) on the specific details of the fragmentation. Finally, we discuss the mechanisms behind this observed robustness and comment on the relevance of our results both to random search theory in general and to the foraging problem in the biological context.
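A minimal sketch of such a searcher (our illustration, not the authors' code; for brevity the toy removes targets once found, whereas the study allows unlimited revisits): step lengths are drawn by inverse-transform sampling from p(ℓ) ∝ ℓ^(−μ) for ℓ ≥ ℓ0, turning angles uniformly, and efficiency is counted as targets found per unit distance travelled in a periodic landscape with two hypothetical square patches.

```python
# Toy 2-D Lévy-like search in a patchy landscape (illustrative parameters).
import random
import math

def levy_step(mu, l0, rng):
    # Inverse-transform sample from p(l) ~ l**(-mu), l >= l0 (requires mu > 1)
    return l0 * (1.0 - rng.random()) ** (-1.0 / (mu - 1.0))

def search_efficiency(mu, n_targets=200, L=100.0, r=1.0,
                      patches=((25.0, 25.0), (75.0, 70.0)), patch_side=20.0,
                      budget=5000.0, seed=7):
    rng = random.Random(seed)
    targets = []
    for _ in range(n_targets):             # targets only inside the patches
        cx, cy = patches[rng.randrange(len(patches))]
        targets.append((cx + (rng.random() - 0.5) * patch_side,
                        cy + (rng.random() - 0.5) * patch_side))
    x, y = patches[0]                      # start inside the first patch
    travelled, found = 0.0, 0
    while travelled < budget and targets:
        theta = rng.uniform(0.0, 2.0 * math.pi)
        step = min(levy_step(mu, 1.0, rng), budget - travelled)
        n_inc = max(1, int(step))          # scan for targets along the step
        dx = step / n_inc * math.cos(theta)
        dy = step / n_inc * math.sin(theta)
        for _ in range(n_inc):
            x, y = (x + dx) % L, (y + dy) % L
            remaining = [p for p in targets
                         if (p[0] - x) ** 2 + (p[1] - y) ** 2 > r * r]
            found += len(targets) - len(remaining)
            targets = remaining
        travelled += step
    return found / travelled               # targets found per unit distance

for mu in (1.5, 2.0, 2.5, 3.0):
    print(mu, search_efficiency(mu))
```

The sketch only shows the machinery; reproducing the μ_opt ≈ 2 optimum quantitatively requires the revisitable-target setting and the averaging described in the paper.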
A single-loop optimization method for reliability analysis with second order uncertainty
NASA Astrophysics Data System (ADS)
Xie, Shaojun; Pan, Baisong; Du, Xiaoping
2015-08-01
Reliability analysis may involve random variables and interval variables. In addition, some of the random variables may have interval distribution parameters owing to limited information; this kind of uncertainty is called second order uncertainty. This article develops an efficient reliability method for problems involving the three aforementioned types of uncertain input variables. The analysis produces the maximum and minimum reliability and is computationally demanding because two loops are needed: a reliability analysis loop with respect to the random variables and an interval analysis loop for extreme responses with respect to the interval variables. The first order reliability method and nonlinear optimization are used for the two loops, respectively. For computational efficiency, the two loops are combined into a single loop by treating the Karush-Kuhn-Tucker (KKT) optimality conditions of the interval analysis as constraints. Three examples are presented to demonstrate the proposed method.
NASA Astrophysics Data System (ADS)
Li, Runze; Peng, Tong; Liang, Yansheng; Yang, Yanlong; Yao, Baoli; Yu, Xianghua; Min, Junwei; Lei, Ming; Yan, Shaohui; Zhang, Chunmin; Ye, Tong
2017-10-01
Focusing and imaging through scattering media has been proven possible with high-resolution wavefront shaping. A completely scrambled scattering field can be corrected by applying a correction phase mask on a phase-only spatial light modulator (SLM), thereby improving the focusing quality. The correction phase is often found by global search algorithms, among which the Genetic Algorithm (GA) stands out for its parallel optimization process and high performance in noisy environments. However, the convergence of GA slows down gradually as the optimization progresses, causing the improvement factor to reach a plateau eventually. In this report, we propose an interleaved segment correction (ISC) method that can significantly boost the improvement factor with the same number of iterations compared with the conventional all-segment correction method. In the ISC method, all the phase segments are divided into a number of interleaved groups; GA optimization procedures are performed individually and sequentially on each group of segments. The final correction phase mask is formed by applying the correction phases of all interleaved groups together on the SLM. The ISC method proves especially useful in practice because of its ability to achieve better improvement factors when noise is present in the system. We also demonstrate that imaging quality improves as better correction phases are found and applied on the SLM. Additionally, the ISC method lowers the dynamic range demands on detection devices. The proposed method holds potential for applications such as high-resolution imaging in deep tissue.
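The ISC idea can be sketched numerically as follows (our toy, not the authors' implementation): the scattering medium is reduced to a random complex transmission vector, focus intensity plays the role of the improvement metric, and a small GA optimizes each interleaved group of segment phases in turn; all sizes and GA parameters below are illustrative.

```python
# Toy interleaved segment correction: a GA optimizes phase segments
# group by group. Focus intensity is |sum_n t_n * exp(i*phi_n)|**2,
# maximal (= N**2) when phi_n cancels the medium phase arg(t_n).
import cmath
import math
import random

rng = random.Random(0)
N = 32                                    # number of SLM segments (illustrative)
t = [cmath.rect(1.0, rng.uniform(-math.pi, math.pi)) for _ in range(N)]

def intensity(phi):
    return abs(sum(t[n] * cmath.exp(1j * phi[n]) for n in range(N))) ** 2

def ga_optimize_group(phi, group, pop_size=20, gens=30):
    # Evolve only the phases of the segments in `group`; others held fixed.
    def fit(genes):
        trial = phi[:]
        for g, v in zip(group, genes):
            trial[g] = v
        return intensity(trial)
    pop = [[rng.uniform(0.0, 2 * math.pi) for _ in group]
           for _ in range(pop_size)]
    pop[0] = [phi[g] for g in group]      # elitism keeps the incumbent mask
    for _ in range(gens):
        pop.sort(key=fit, reverse=True)
        elite = pop[:pop_size // 2]       # top half survives unchanged
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            if rng.random() < 0.3:        # occasional random mutation
                child[rng.randrange(len(group))] = rng.uniform(0.0, 2 * math.pi)
            children.append(child)
        pop = elite + children
    for g, v in zip(group, max(pop, key=fit)):
        phi[g] = v
    return phi

phi = [0.0] * N
base = intensity(phi)
n_groups = 4                              # interleaving: segment n -> group n % 4
for k in range(n_groups):
    phi = ga_optimize_group(phi, [n for n in range(N) if n % n_groups == k])
print(intensity(phi) / base)              # improvement factor, >= 1 by elitism
```

Because each group sees the partially corrected background left by the previous groups, later groups start from a higher baseline, which is the intuition behind the boosted improvement factor reported in the paper.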
Intelligent Fault Diagnosis of HVCB with Feature Space Optimization-Based Random Forest
Ma, Suliang; Wu, Jianwen; Wang, Yuhao; Jia, Bowen; Jiang, Yuan
2018-01-01
Mechanical faults of high-voltage circuit breakers (HVCBs) inevitably occur over long-term operation, so extracting fault features and identifying the fault type have become key issues for ensuring the security and reliability of the power supply. Based on wavelet packet decomposition technology and the random forest algorithm, an effective identification system was developed in this paper. First, compared with the incomplete description provided by Shannon entropy, the wavelet packet time-frequency energy rate (WTFER) was adopted as the input vector for the classifier model in the feature selection procedure. Then, a random forest classifier was used to diagnose the HVCB fault, assess the importance of the feature variables, and optimize the feature space. Finally, the approach was verified on actual HVCB vibration signals covering six typical fault classes. The comparative experimental results show that the classification accuracy of the proposed method reached 93.33% with the original feature space and up to 95.56% with the optimized input feature vector. This indicates that the feature optimization procedure is successful and that the proposed diagnosis algorithm has higher efficiency and robustness than traditional methods. PMID:29659548
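A simplified, self-contained sketch of the feature-extraction step (ours; it uses a plain Haar wavelet packet transform and per-band energy rates in the spirit of, but not identical to, the paper's WTFER):

```python
# Haar wavelet packet decomposition of a vibration-like signal; each
# terminal band's share of total energy forms a feature vector that a
# downstream classifier (e.g., a random forest) could consume.
import math

def haar_split(x):
    # One level of the Haar packet transform: lowpass and highpass halves
    s = math.sqrt(2.0)
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x), 2)]
    return approx, detail

def wavelet_packet_energy_features(x, levels=3):
    nodes = [x]
    for _ in range(levels):               # split every node at every level
        nodes = [half for node in nodes for half in haar_split(node)]
    energies = [sum(v * v for v in node) for node in nodes]
    total = sum(energies) or 1.0          # guard against an all-zero signal
    return [e / total for e in energies]  # 2**levels band energy rates

# A toy "vibration signal": two tones plus a transient spike
sig = [math.sin(2 * math.pi * 5 * i / 256) +
       0.5 * math.sin(2 * math.pi * 40 * i / 256) for i in range(256)]
sig[128] += 3.0
feats = wavelet_packet_energy_features(sig, levels=3)
print(len(feats), round(sum(feats), 6))   # 8 bands, rates sum to 1.0
```

In the paper's pipeline these band rates would then be ranked by random forest feature importance, and low-importance bands dropped to form the optimized feature space.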
Robustness-Based Design Optimization Under Data Uncertainty
NASA Technical Reports Server (NTRS)
Zaman, Kais; McDonald, Mark; Mahadevan, Sankaran; Green, Lawrence
2010-01-01
This paper proposes formulations and algorithms for design optimization under both aleatory uncertainty (i.e., natural or physical variability) and epistemic uncertainty (i.e., imprecise probabilistic information), from the perspective of system robustness. The proposed formulations deal with epistemic uncertainty arising from both sparse and interval data without any assumption about the probability distributions of the random variables. A decoupled approach is proposed to un-nest the robustness-based design from the analysis of the non-design epistemic variables in order to achieve computational efficiency. The proposed methods are illustrated for the upper-stage design problem of a two-stage-to-orbit (TSTO) vehicle, where the information on the random design inputs is only available as sparse point and/or interval data. As collecting more data reduces uncertainty but increases cost, the effect of sample size on the optimality and robustness of the solution is also studied. A method is developed to determine the optimal sample size for sparse point data that leads to solutions of the design problem that are least sensitive to variations in the input random variables.
Hierarchical Solution of the Traveling Salesman Problem with Random Dyadic Tilings
NASA Astrophysics Data System (ADS)
Kalmár-Nagy, Tamás; Bak, Bendegúz Dezső
We propose a hierarchical heuristic approach for solving the Traveling Salesman Problem (TSP) in the unit square. The points are partitioned with a random dyadic tiling and clusters are formed by the points located in the same tile. Each cluster is represented by its geometrical barycenter and a “coarse” TSP solution is calculated for these barycenters. Midpoints are placed at the middle of each edge in the coarse solution. Near-optimal (or optimal) minimum tours are computed for each cluster. The tours are concatenated using the midpoints yielding a solution for the original TSP. The method is tested on random TSPs (independent, identically distributed points in the unit square) up to 10,000 points as well as on a popular benchmark problem (att532 — coordinates of 532 American cities). Our solutions are 8-13% longer than the optimal ones. We also present an optimization algorithm for the partitioning to improve our solutions. This algorithm further reduces the solution errors (by several percent using 1000 iteration steps). The numerical experiments demonstrate the viability of the approach.
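A stripped-down sketch of the hierarchy (ours, with simplifying substitutions: a regular grid stands in for the random dyadic tiling, nearest-neighbour tours replace the near-optimal cluster tours, and the midpoint splicing step is omitted):

```python
# Hierarchical TSP heuristic sketch: cluster points by tile, tour the tile
# barycentres coarsely, then tour each cluster's points in that order.
import math
import random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nn_tour(points, start=0):
    # Greedy nearest-neighbour tour, returned as an index order
    unvisited = set(range(len(points)))
    order = [start]
    unvisited.remove(start)
    while unvisited:
        last = points[order[-1]]
        nxt = min(unvisited, key=lambda i: dist(points[i], last))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

def hierarchical_tsp(points, grid=4):
    tiles = {}
    for p in points:                       # assign each point to a tile
        key = (min(int(p[0] * grid), grid - 1),
               min(int(p[1] * grid), grid - 1))
        tiles.setdefault(key, []).append(p)
    clusters = list(tiles.values())
    bary = [(sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            for c in clusters]
    tour = []
    for ci in nn_tour(bary):               # "coarse" tour of barycentres
        c = clusters[ci]
        tour.extend(c[i] for i in nn_tour(c))  # fine tour inside the tile
    return tour

def tour_length(tour):
    return sum(dist(tour[i], tour[(i + 1) % len(tour)])
               for i in range(len(tour)))

rng = random.Random(42)
pts = [(rng.random(), rng.random()) for _ in range(400)]
naive_len = tour_length(pts)               # visiting points in listing order
hier_len = tour_length(hierarchical_tsp(pts))
print(hier_len < naive_len)                # hierarchical tour is far shorter
```

The full method additionally computes (near-)optimal tours per cluster and splices them through edge midpoints of the coarse tour, which is where the reported 8-13% gap to optimal comes from.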
NASA Astrophysics Data System (ADS)
Moran, Steve E.; Lugannani, Robert; Craig, Peter N.; Law, Robert L.
1989-02-01
An analysis is made of the performance of an optically phase-locked electronic speckle pattern interferometer in the presence of random noise displacements. Expressions for the phase-locked speckle contrast for single-frame imagery and the composite rms exposure for two sequentially subtracted frames are obtained in terms of the phase-locked composite and single-frame fringe functions. The noise fringe functions are evaluated for stationary, coherence-separable noise displacements obeying Gauss-Markov temporal statistics. The theoretical findings presented here are qualitatively supported by experimental results.
Optimizing separate phase light hydrocarbon recovery from contaminated unconfined aquifers
NASA Astrophysics Data System (ADS)
Cooper, Grant S.; Peralta, Richard C.; Kaluarachchi, Jagath J.
A modeling approach is presented that optimizes separate phase recovery of light non-aqueous phase liquids (LNAPL) for a single dual-extraction well in a homogeneous, isotropic unconfined aquifer. A simulation/regression/optimization (S/R/O) model is developed to predict, analyze, and optimize the oil recovery process. The approach combines detailed simulation, nonlinear regression, and optimization. The S/R/O model utilizes nonlinear regression equations describing system response to time-varying water pumping and oil skimming. Regression equations are developed for residual oil volume and free oil volume. The S/R/O model determines optimized time-varying (stepwise) pumping rates that minimize residual oil volume and maximize free oil recovery while causing free oil volume to decrease by a specified amount. This approach implicitly immobilizes the free product plume by reversing the water table gradient while achieving containment. Application to a simple representative problem illustrates the S/R/O model's utility for problem analysis and remediation design. When compared with the best steady pumping strategies, the optimal stepwise pumping strategy improves free oil recovery by 11.5% and reduces the amount of residual oil left in the system due to pumping by 15%. The S/R/O modeling approach offers promise for enhancing the design of free phase LNAPL recovery systems and for helping hydrogeologists, engineers, and regulators make cost-effective operation and management decisions.
Levesque, Janelle V; Lambert, Sylvie D; Girgis, Afaf; Turner, Jane; McElduff, Patrick; Kayser, Karen
2015-01-01
To (a) determine whether the information provided to men with prostate cancer and their partners in the immediate postdiagnostic phase met their needs; and (b) examine patient and partner satisfaction with the information received. Pre-intervention survey data from a pilot randomized controlled trial of a self-directed coping skills intervention involving 42 patients with prostate cancer and their partners were collected to examine their psychosocial concerns/needs. The main concerns for patients and partners were psychosocial in nature, such as managing emotions, concern about the future, and losing control. Overall, patients and partners received the most information about tests and treatment options. Partners reported receiving significantly less information about support services (P = 0.03) and self-care strategies (P = 0.03) compared to patients. Partners also reported being significantly less satisfied with the information they received (P = 0.007). Whereas medical information is routinely given, patients and partners may benefit from greater information about the psychosocial issues arising from cancer. Despite increased recognition of partners' information needs, these needs still remain unmet.
The training intensity distribution among well-trained and elite endurance athletes
Stöggl, Thomas L.; Sperlich, Billy
2015-01-01
Researchers have retrospectively analyzed the training intensity distribution (TID) of nationally and internationally competitive athletes in different endurance disciplines to determine the optimal volume and intensity for maximal adaptation. The majority of studies present a “pyramidal” TID with a high proportion of high volume, low intensity training (HVLIT). Some world-class athletes appear to adopt a so-called “polarized” TID (i.e., significant % of HVLIT and high-intensity training) during certain phases of the season. However, emerging prospective randomized controlled studies have demonstrated superior responses of variables related to endurance when applying a polarized TID in well-trained and recreational individuals when compared with a TID that emphasizes HVLIT or threshold training. The aims of the present review are to: (1) summarize the main responses of retrospective and prospective studies exploring TID; (2) provide a systematic overview on TIDs during preparation, pre-competition, and competition phases in different endurance disciplines and performance levels; (3) address whether one TID has demonstrated greater efficacy than another; and (4) highlight research gaps in an effort to direct future scientific studies. PMID:26578968
Kewei, E; Zhang, Chen; Li, Mengyang; Xiong, Zhao; Li, Dahai
2015-08-10
Based on Legendre polynomial expressions and their properties, this article proposes a new approach to reconstruct the distorted wavefront of a laser beam under test over a square area from the phase difference data obtained by a radial shearing interferometry (RSI) system. Simulation and experimental results verify the reliability of the proposed method. A formula for the error propagation coefficients is deduced for the case where the phase difference data of the overlapping area contain random noise. A matrix T is proposed to evaluate the impact of high-order Legendre polynomial terms on the outcomes of the low-order terms due to mode aliasing; the magnitude of this impact can be estimated by calculating the Frobenius norm of T. In addition, the relationships between the shear ratio, the number of sampling points, the number of polynomial terms, and the noise propagation coefficients, and between the shear ratio, the number of sampling points, and the norm of the T matrix, are both analyzed. These results provide a theoretical reference and guidance for the optimized design of radial shearing interferometry systems.
NASA Technical Reports Server (NTRS)
Murthy, T. Sreekanta
1988-01-01
Several key issues involved in the application of formal optimization techniques to helicopter airframe structures for vibration reduction are addressed. Considerations that are important in the optimization of real airframe structures are discussed, including those necessary to establish a relevant set of design variables, constraints, and objectives appropriate to the conceptual, preliminary, and detailed design phases and to the ground and flight test phases of airframe design. A methodology is suggested for optimization of airframes in the various phases of design. Optimization formulations that are unique to helicopter airframes are described, and expressions for vibration-related functions are derived. Using a recently developed computer code, the optimization of a Bell AH-1G helicopter airframe is demonstrated.
Apker Award Recipient: Renormalization-Group Study of Helium Mixtures Immersed in a Porous Medium
NASA Astrophysics Data System (ADS)
Lopatnikova, Anna
1998-03-01
Superfluidity and phase separation in ^3He-^4He mixtures immersed in aerogel are studied by renormalization-group theory. Firstly, the theory is applied to jungle-gym (non-random) aerogel (A. Lopatnikova and A.N. Berker, Phys. Rev. B 55, 3798 (1997)). This calculation is conducted via the coupled renormalization-group mappings of interactions near and away from aerogel. Superfluidity at very low ^4He concentrations and a depressed tricritical temperature are found at the onset of superfluidity. A superfluid-superfluid phase separation, terminating at an isolated critical point, is found entirely within the superfluid phase. Secondly, the theory is applied to true aerogel, which has quenched disorder at both the atomic and geometric levels (A. Lopatnikova and A.N. Berker, Phys. Rev. B 56, 11865 (1997)). This calculation is conducted via the coupled renormalization-group mappings, near and away from aerogel, of quenched probability distributions of random interactions. Random-bond effects on the superfluidity onset and random-field effects on the superfluid phase separation are seen. The quenched randomness causes the λ line of second-order phase transitions at the superfluidity onset to reach zero temperature, in agreement with general predictions and experiments. Based on these studies, the experimentally observed (S.B. Kim, J. Ma, and M.H.W. Chan, Phys. Rev. Lett. 71, 2268 (1993); N. Mulders and M.H.W. Chan, Phys. Rev. Lett. 75, 3705 (1995)) distinctive characteristics of ^3He-^4He mixtures in aerogel are related to the aerogel properties of connectivity, tenuousness, and atomic and geometric randomness.
Optimization of vehicle deceleration to reduce occupant injury risks in frontal impact.
Mizuno, Koji; Itakura, Takuya; Hirabayashi, Satoko; Tanaka, Eiichi; Ito, Daisuke
2014-01-01
In vehicle frontal impacts, vehicle acceleration has a large effect on occupant loadings and injury risks. In this research, an optimal vehicle crash pulse was determined systematically to reduce injury measures of rear seat occupants by using mathematical simulations. The vehicle crash pulse was optimized based on a vehicle deceleration-deformation diagram under the conditions that the initial velocity and the maximum vehicle deformation were constant. Initially, a spring-mass model was used to understand the fundamental parameters for optimization. In order to investigate the optimization under a more realistic situation, the vehicle crash pulse was also optimized using a multibody model of a Hybrid III dummy seated in the rear seat for the objective functions of chest acceleration and chest deflection. A sled test using a Hybrid III dummy was carried out to confirm the simulation results. Finally, the optimal crash pulses determined from the multibody simulation were applied to a human finite element (FE) model. The optimized crash pulse to minimize the occupant deceleration had a concave shape: a high deceleration in the initial phase, low in the middle phase, and high again in the final phase. This crash pulse shape depended on the occupant restraint stiffness. The optimized crash pulse determined from the multibody simulation was comparable to that from the spring-mass model. From the sled test, it was demonstrated that the optimized crash pulse was effective for the reduction of chest acceleration. The crash pulse was also optimized for the objective function of chest deflection. The optimized crash pulse in the final phase was lower than that obtained for the minimization of chest acceleration. In the FE analysis of the human FE model, the optimized pulse for the objective function of the Hybrid III chest deflection was effective in reducing rib fracture risks. 
The optimized crash pulse has a concave shape and is dependent on the occupant restraint stiffness and maximum vehicle deformation. The shapes of the optimized crash pulse in the final phase were different for the objective functions of chest acceleration and chest deflection due to the inertial forces of the head and upper extremities. From the human FE model analysis it was found that the optimized crash pulse for the Hybrid III chest deflection can substantially reduce the risk of rib cage fractures. Supplemental materials are available for this article. Go to the publisher's online edition of Traffic Injury Prevention to view the supplemental file.
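As a rough illustration of the spring-mass reasoning above, the sketch below integrates a one-sided occupant restraint spring against a prescribed vehicle deceleration pulse. This is a minimal toy model, not the authors' simulation; the restraint stiffness, impact speed, and pulse magnitudes are assumed round numbers.

```python
G = 9.81  # m/s^2

def peak_occupant_decel(pulse, k_over_m=2000.0, v0=15.6, dt=1e-4, t_end=0.25):
    """Integrate a 1-D occupant-on-restraint-spring model against a
    prescribed vehicle deceleration pulse (m/s^2 vs. time); returns the
    peak occupant deceleration in m/s^2."""
    xv, vv = 0.0, v0   # vehicle position / velocity
    xo, vo = 0.0, v0   # occupant position / velocity
    peak, t = 0.0, 0.0
    while t < t_end:
        a_v = -pulse(t) if vv > 0 else 0.0     # vehicle stops, no rebound
        slack = xo - xv                        # forward excursion into the belt
        a_o = -k_over_m * slack if slack > 0 else 0.0   # one-sided restraint
        peak = max(peak, -a_o)
        xv, vv = xv + vv * dt, vv + a_v * dt
        xo, vo = xo + vo * dt, vo + a_o * dt
        t += dt
    return peak

flat = lambda t: 20 * G                        # constant 20 g crash pulse
def concave(t):                                # high-low-high shape
    return (35 if t < 0.02 else 10 if t < 0.06 else 32) * G

p_flat = peak_occupant_decel(flat)
p_concave = peak_occupant_decel(concave)
print(p_flat / G, p_concave / G)
```

Feeding different pulse shapes through the model shows how strongly the peak occupant loading depends on both the pulse shape and the restraint stiffness, which is the trade-off the paper optimizes.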
Lee, Ki Hyeong; Kim, Ji-Yeon; Lee, Moon Hee; Han, Hye Sook; Lim, Joo Han; Park, Keon Uk; Park, In Hae; Cho, Eun Kyung; Yoon, So Young; Kim, Jee Hyun; Choi, In Sil; Park, Jae Hoo; Choi, Young Jin; Kim, Hee-Jun; Jung, Kyung Hae; Kim, Si-Young; Oh, Do-Youn; Im, Seock-Ah
2016-04-01
Pegylated granulocyte-colony-stimulating factor (G-CSF) is frequently used to prevent febrile neutropenia (FN) in patients undergoing chemotherapy with a high risk of myelosuppression. This phase II/III study was conducted to determine the adequate dose of pegteograstim, a new formulation of pegylated G-CSF, and to evaluate the efficacy and safety of pegteograstim compared to pegfilgrastim. In the phase II part, 60 breast cancer patients who were undergoing DA (docetaxel and doxorubicin) or TAC (docetaxel, doxorubicin, and cyclophosphamide) chemotherapy were randomized to receive a single subcutaneous injection of 3.6 or 6.0 mg pegteograstim on day 2 of each chemotherapy cycle. The phase III part was seamlessly started to compare the dose of pegteograstim selected in phase II with 6.0 mg pegfilgrastim in 117 breast cancer patients. The primary endpoint of both the phase II and III parts was the duration of grade 4 neutropenia in chemotherapy cycle 1. The mean duration of grade 4 neutropenia for 3.6 mg pegteograstim (n = 33) was similar to that for 6.0 mg pegteograstim (n = 26) (1.97 ± 1.79 days vs. 1.54 ± 0.95 days, p = 0.33). The 6.0 mg dose of pegteograstim was selected for comparison with 6.0 mg pegfilgrastim in the phase III part. In the phase III part, the primary analysis revealed that the efficacy of pegteograstim (n = 56) was non-inferior to that of pegfilgrastim (n = 59) [duration of grade 4 neutropenia, 1.64 ± 1.18 days vs. 1.80 ± 1.05 days; difference, -0.15 ± 1.11 (p = 0.36, 97.5 % confidence interval = -0.57 to 0.26)]. The time to absolute neutrophil count (ANC) recovery (≥2000/μL) was significantly shorter for pegteograstim than for pegfilgrastim (8.85 ± 1.45 days vs. 9.83 ± 1.20 days, p < 0.0001). Other secondary endpoints showed no significant difference between the two groups. The safety profiles of the two groups did not differ significantly.
Pegteograstim was shown to be as effective as pegfilgrastim in the reduction of chemotherapy-induced neutropenia in the breast cancer patients who were undergoing chemotherapy with a high risk of myelosuppression.
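The non-inferiority comparison above can be roughly reconstructed from the reported summary statistics alone. This is a normal-approximation sketch using a Welch standard error; it will not reproduce the trial's exact analysis, and the non-inferiority margin is whatever the protocol specified, not shown here.

```python
import math

# Summary statistics reported in the abstract (days of grade 4 neutropenia)
m1, s1, n1 = 1.64, 1.18, 56   # pegteograstim
m2, s2, n2 = 1.80, 1.05, 59   # pegfilgrastim

diff = m1 - m2                               # negative favors pegteograstim
se = math.sqrt(s1**2 / n1 + s2**2 / n2)      # Welch standard error
z = 2.2414                                   # normal quantile for a two-sided 97.5% CI
lo, hi = diff - z * se, diff + z * se
print(f"difference = {diff:.2f} days, 97.5% CI = ({lo:.2f}, {hi:.2f})")
# Non-inferiority holds when the upper CI bound stays below the margin.
```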
ERIC Educational Resources Information Center
Wu, Wei; Jia, Fan; Kinai, Richard; Little, Todd D.
2017-01-01
Spline growth modelling is a popular tool to model change processes with distinct phases and change points in longitudinal studies. Focusing on linear spline growth models with two phases and a fixed change point (the transition point from one phase to the other), we detail how to find optimal data collection designs that maximize the efficiency…
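A two-phase linear spline growth curve with a fixed change point, as described above, can be fit by ordinary least squares with a simple basis. The sketch below uses synthetic, noise-free data and a hand-rolled solver so it stays self-contained; the knot location and slopes are illustrative.

```python
def spline_basis(t, knot):
    """Design row for a two-phase linear spline with a fixed change point:
    intercept, phase-1 slope, and the slope change after the knot."""
    return [1.0, t, max(t - knot, 0.0)]

def lstsq(X, y):
    """Solve the normal equations (X^T X) b = X^T y by Gaussian elimination."""
    p = len(X[0])
    A = [[sum(X[i][r] * X[i][c] for i in range(len(X))) for c in range(p)]
         for r in range(p)]
    b = [sum(X[i][r] * y[i] for i in range(len(X))) for r in range(p)]
    for col in range(p):                         # elimination with partial pivoting
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * p
    for r in reversed(range(p)):                 # back substitution
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, p))) / A[r][r]
    return beta

knot = 4.0                                       # fixed change point (e.g. wave 4)
times = [0, 1, 2, 3, 4, 5, 6, 7, 8]
# Synthetic trajectory: slope 2 before the knot, slope -1 after
y = [1 + 2 * t - 3 * max(t - knot, 0) for t in times]
X = [spline_basis(t, knot) for t in times]
intercept, slope1, delta = lstsq(X, y)
print(intercept, slope1, slope1 + delta)         # phase-2 slope = slope1 + delta
```

Design optimization in this setting amounts to choosing the measurement times so that the covariance of these three estimates is as small as possible for a fixed number of waves.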
Clogging and depinning of ballistic active matter systems in disordered media
NASA Astrophysics Data System (ADS)
Reichhardt, C.; Reichhardt, C. J. O.
2018-05-01
We numerically examine ballistic active disks driven through a random obstacle array. Formation of a pinned or clogged state occurs at much lower obstacle densities for the active disks than for passive disks. As a function of obstacle density, we identify several distinct phases including a depinned fluctuating cluster state, a pinned single-cluster or jammed state, a pinned multicluster state, a pinned gel state, and a pinned disordered state. At lower active disk densities, a drifting uniform liquid forms in the absence of obstacles, but when even a small number of obstacles are introduced, the disks organize into a pinned phase-separated cluster state in which clusters nucleate around the obstacles, similar to a wetting phenomenon. We examine how the depinning threshold changes as a function of disk or obstacle density and find a crossover from a collectively pinned cluster state to a disordered plastic depinning transition as a function of increasing obstacle density. We compare this to the behavior of nonballistic active particles and show that as we vary the activity from completely passive to completely ballistic, a clogged phase-separated state appears in both the active and passive limits, while for intermediate activity, a readily flowing liquid state appears and there is an optimal activity level that maximizes the flux through the sample.
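A minimal toy version of such a simulation (emphatically not the authors' model) already shows how obstacles suppress the flux of driven ballistic disks. The box size, densities, time step, and the crude "stay pinned if blocked" rule are all arbitrary choices for illustration.

```python
import math, random

def flux(n_obstacles, n_disks=50, steps=2000, seed=7):
    """Mean x-velocity of driven ballistic disks in a periodic box with
    fixed circular obstacles; a disk whose next position would overlap an
    obstacle simply does not move that step (a crude clogging rule)."""
    rng = random.Random(seed)
    L, r, dt, drive = 20.0, 0.5, 0.05, 1.0
    obs = [(rng.uniform(0, L), rng.uniform(0, L)) for _ in range(n_obstacles)]
    pts = [[rng.uniform(0, L), rng.uniform(0, L)] for _ in range(n_disks)]
    hdg = [rng.uniform(0, 2 * math.pi) for _ in range(n_disks)]  # fixed ballistic headings
    total_vx = 0.0
    for _ in range(steps):
        for i, p in enumerate(pts):
            vx = drive + math.cos(hdg[i])        # drive plus ballistic motility
            vy = math.sin(hdg[i])
            nx, ny = p[0] + vx * dt, p[1] + vy * dt
            for ox, oy in obs:
                dx = (nx - ox + L / 2) % L - L / 2   # minimum-image distance
                dy = (ny - oy + L / 2) % L - L / 2
                if dx * dx + dy * dy < (2 * r) ** 2:
                    break                        # blocked: disk stays pinned
            else:
                total_vx += vx
                p[0], p[1] = nx % L, ny % L
    return total_vx / (n_disks * steps)

f_free, f_clogged = flux(0), flux(60)
print(f_free, f_clogged)
```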
NASA Astrophysics Data System (ADS)
Lyu, Yuexi; Han, Xi; Sun, Yaoyao; Jiang, Zhi; Guo, Chunyan; Xiang, Wei; Dong, Yinan; Cui, Jie; Yao, Yuan; Jiang, Dongwei; Wang, Guowei; Xu, Yingqiang; Niu, Zhichuan
2018-01-01
We report on the growth of a high-quality GaSb-based AlInAsSb quaternary alloy by molecular beam epitaxy (MBE) to fabricate avalanche photodiodes (APDs). By means of high-resolution X-ray diffraction (HRXRD) and scanning transmission electron microscopy (STEM), the phase separation phenomenon of the AlInAsSb random alloy, with a naturally occurring vertical superlattice configuration, was demonstrated. To overcome the tendency for phase segregation while maintaining a highly crystalline film, a digital alloy technique with a migration-enhanced epitaxy growth method was employed, using a shutter sequence of AlSb, AlAs, AlSb, Sb, In, InAs, In, Sb. The AlInAsSb digital alloy proved to be reproducible and consistently single phase, showing sharp satellite peaks in the HRXRD rocking curve and smooth surface morphology under atomic force microscopy (AFM). Using the optimized digital alloy, an AlInAsSb separate absorption, grading, charge, and multiplication (SAGCM) APD was grown and fabricated. At room temperature, the device showed high performance, with a low dark current density of ∼14.1 mA/cm2 at 95% breakdown and a maximum stable gain before breakdown as high as ∼200, showing potential for further applications in optoelectronic devices.
NASA Astrophysics Data System (ADS)
Lu, Xuekun; Heenan, Thomas M. M.; Bailey, Josh J.; Li, Tao; Li, Kang; Brett, Daniel J. L.; Shearing, Paul R.
2017-10-01
This study aims to correlate the active triple phase boundaries (TPBs) with the variation of as-prepared anode microstructures and Ni densification, based on a reconstructed 3D volume of an SOFC anode, providing a point of comparison with theoretical studies that derive the relationship between TPBs and the material microstructure from randomly packed sphere models. The TPB degradation mechanisms are explained using a particle network model. The results indicate that in the low-porosity regime, the TPBs increase sharply with porosity up to the percolation threshold (10%); at intermediate porosity (10%-25%), a balance of surface area between the three phases is more critical than a balance of volume fractions for reaching the optimal TPB density; in the high-porosity regime (>25%), the TPBs start to drop due to the shrinkage and detachment of Ni/YSZ interfaces. The TPB density is inversely proportional to the degree of Ni densification as long as the Ni content is above the percolation threshold (35%), and can be improved by 70% within a 7% change of porosity provided that over-densification is mitigated. This has implications for the design of SOFC microstructures as well as for electrode durability, where Ni agglomeration is known to deleteriously impact long-term operation.
Improving experimental phases for strong reflections prior to density modification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uervirojnangkoorn, Monarin; Hilgenfeld, Rolf; Terwilliger, Thomas C.
2013-09-20
Experimental phasing of diffraction data from macromolecular crystals involves deriving phase probability distributions. These distributions are often bimodal, making their weighted average, the centroid phase, improbable, so that electron-density maps computed using centroid phases are often non-interpretable. Density modification brings in information about the characteristics of electron density in protein crystals. In successful cases, this allows a choice between the modes in the phase probability distributions, and the maps can cross the borderline between non-interpretable and interpretable. Based on the suggestions by Vekhter [Vekhter (2005), Acta Cryst. D 61, 899–902], the impact of identifying optimized phases for a small number of strong reflections prior to the density-modification process was investigated while using the centroid phase as a starting point for the remaining reflections. A genetic algorithm was developed that optimizes the quality of such phases using the skewness of the density map as a target function. Phases optimized in this way are then used in density modification. In most of the tests, the resulting maps were of higher quality than maps generated from the original centroid phases. In one of the test cases, the new method sufficiently improved a marginal set of experimental SAD phases to enable successful map interpretation. Lastly, a computer program, SISA, has been developed to apply this method for phase improvement in macromolecular crystallography.
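The skewness target used by the genetic algorithm is cheap to evaluate. The sketch below computes it for two stand-in "maps": well-phased protein maps, with sharp positive peaks against flat solvent, score higher than noise-like maps. The distributions here are illustrative stand-ins, not crystallographic data.

```python
import random

def skewness(rho):
    """Third standardized moment of a density map: a peaked, protein-like
    map scores higher than featureless noise."""
    n = len(rho)
    mean = sum(rho) / n
    var = sum((x - mean) ** 2 for x in rho) / n
    m3 = sum((x - mean) ** 3 for x in rho) / n
    return m3 / var ** 1.5

random.seed(0)
flat = [random.gauss(0, 1) for _ in range(10000)]        # noise-like "map"
peaky = [random.expovariate(1.0) for _ in range(10000)]  # peaked "map"
s_flat, s_peaky = skewness(flat), skewness(peaky)
print(s_flat, s_peaky)
```

A genetic algorithm can then score each candidate set of strong-reflection phases by the skewness of the map it produces.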
An Expert System-Driven Method for Parametric Trajectory Optimization During Conceptual Design
NASA Technical Reports Server (NTRS)
Dees, Patrick D.; Zwack, Mathew R.; Steffens, Michael; Edwards, Stephen; Diaz, Manuel J.; Holt, James B.
2015-01-01
During the early phases of engineering design, the costs committed are high, costs incurred are low, and the design freedom is high. It is well documented that decisions made in these early design phases drive the entire design's life cycle cost. In a traditional paradigm, key design decisions are made when little is known about the design. As the design matures, design changes become more difficult in both cost and schedule to enact. The current capability-based paradigm, which has emerged because of the constrained economic environment, calls for the infusion of knowledge usually acquired during later design phases into earlier design phases, i.e. bringing knowledge acquired during preliminary and detailed design into pre-conceptual and conceptual design. An area of critical importance to launch vehicle design is the optimization of its ascent trajectory, as the optimal trajectory will be able to take full advantage of the launch vehicle's capability to deliver a maximum amount of payload into orbit. Hence, the optimal ascent trajectory plays an important role in the vehicle's affordability posture yet little of the information required to successfully optimize a trajectory is known early in the design phase. Thus, the current paradigm of optimizing ascent trajectories involves generating point solutions for every change in a vehicle's design parameters. This is often a very tedious, manual, and time-consuming task for the analysts. Moreover, the trajectory design space is highly non-linear and multi-modal due to the interaction of various constraints. When these obstacles are coupled with the Program to Optimize Simulated Trajectories (POST), an industry standard program to optimize ascent trajectories that is difficult to use, expert trajectory analysts are required to effectively optimize a vehicle's ascent trajectory. Over the course of this paper, the authors discuss a methodology developed at NASA Marshall's Advanced Concepts Office to address these issues. 
The methodology is two-fold: first, capture the heuristics developed by human analysts over their many years of experience; and second, leverage the power of modern computing to evaluate multiple trajectories simultaneously and thereby enable exploration of the trajectory's design space early during the pre-conceptual and conceptual phases of design. This methodology is coupled with design of experiments in order to train surrogate models, which enables trajectory design space visualization and parametric optimal ascent trajectory information to be available when early design decisions are being made.
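The surrogate-model idea above can be sketched in miniature: run the expensive analysis only at design-of-experiments points, then scan a cheap surrogate over the whole design space. The sketch uses inverse-distance weighting rather than whatever surrogate form the authors trained, and a made-up one-parameter "payload" function standing in for a trajectory code such as POST.

```python
# Hypothetical "truth": payload delivered as a function of a single ascent
# parameter (e.g. a pitch-over angle in degrees); peaks at theta = 12.
def payload(theta):
    return 100 - 0.8 * (theta - 12.0) ** 2

# Design of experiments: evenly spaced training runs of the expensive code
samples = [(t, payload(t)) for t in [4, 6, 8, 10, 12, 14, 16, 18, 20]]

def surrogate(theta, power=2.0):
    """Inverse-distance-weighted surrogate: cheap to evaluate, so the whole
    design space can be scanned without rerunning the trajectory code."""
    num = den = 0.0
    for t, y in samples:
        d = abs(theta - t)
        if d < 1e-12:
            return y                      # exact at the training points
        w = 1.0 / d ** power
        num += w * y
        den += w
    return num / den

# Scan the surrogate to locate the (approximate) optimal ascent parameter
grid = [4 + 0.1 * i for i in range(161)]
best = max(grid, key=surrogate)
print(best, surrogate(best))
```

The same pattern scales to several design parameters; the surrogate is what makes interactive design-space visualization feasible during conceptual design.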
Statistical model for speckle pattern optimization.
Su, Yong; Zhang, Qingchuan; Gao, Zeren
2017-11-27
Image registration is the key technique of optical metrologies such as digital image correlation (DIC), particle image velocimetry (PIV), and speckle metrology. Its performance depends critically on the quality of the image pattern, and thus pattern optimization attracts extensive attention. In this article, a statistical model is built to optimize speckle patterns that are composed of randomly positioned speckles. It is found that the process of speckle pattern generation is essentially a filtered Poisson process. The dependence of measurement errors (including systematic errors, random errors, and overall errors) upon speckle pattern generation parameters is characterized analytically. By minimizing the errors, formulas for the optimal speckle radius are presented. Although the primary motivation is from the field of DIC, we believe that scholars in other optical measurement communities, such as PIV and speckle metrology, will benefit from these discussions.
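A filtered Poisson process of the kind described can be generated directly: draw a Poisson number of speckle centers, place them uniformly, and filter each with a smooth kernel. The density and Gaussian kernel radius below are arbitrary illustration values, not the paper's optimal parameters.

```python
import math, random

def speckle_pattern(size=64, density=0.02, radius=2.0, seed=1):
    """Filtered Poisson process: speckle count ~ Poisson(density * area),
    centers uniform in the image, each filtered by a Gaussian kernel."""
    rng = random.Random(seed)
    mean = density * size * size
    # Poisson sample via Knuth's method (fine for moderate means)
    L, k, p = math.exp(-mean), 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    n_speckles = k - 1
    img = [[0.0] * size for _ in range(size)]
    for _ in range(n_speckles):
        cx, cy = rng.uniform(0, size), rng.uniform(0, size)
        for y in range(size):
            for x in range(size):
                d2 = (x - cx) ** 2 + (y - cy) ** 2
                img[y][x] += math.exp(-d2 / (2 * radius ** 2))
    return img, n_speckles

img, n = speckle_pattern()
print(n)
```

Pattern optimization then amounts to choosing the density and speckle radius that minimize the predicted registration errors for a given subset size.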
Existence and Optimality Conditions for Risk-Averse PDE-Constrained Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kouri, Drew Philip; Surowiec, Thomas M.
2018-06-05
Uncertainty is ubiquitous in virtually all engineering applications, and, for such problems, it is inadequate to simulate the underlying physics without quantifying the uncertainty in unknown or random inputs, boundary and initial conditions, and modeling assumptions. In this paper, we introduce a general framework for analyzing risk-averse optimization problems constrained by partial differential equations (PDEs). In particular, we postulate conditions on the random variable objective function as well as the PDE solution that guarantee existence of minimizers. Furthermore, we derive optimality conditions and apply our results to the control of an environmental contaminant. Lastly, we introduce a new risk measure, called the conditional entropic risk, that fuses desirable properties from both the conditional value-at-risk and the entropic risk measures.
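The two risk measures named above have short Monte Carlo estimators, sketched below. The loss distribution is a stand-in for the random objective (e.g. contaminant mass under uncertain inputs), not the paper's actual model; both measures upper-bound the plain expectation, which is what makes them risk-averse.

```python
import math, random

def cvar(losses, alpha=0.9):
    """Conditional value-at-risk: mean of the worst (1 - alpha) tail."""
    tail = sorted(losses)[int(alpha * len(losses)):]
    return sum(tail) / len(tail)

def entropic_risk(losses, sigma=1.0):
    """Entropic risk: (1/sigma) * log E[exp(sigma * loss)]."""
    n = len(losses)
    return math.log(sum(math.exp(sigma * x) for x in losses) / n) / sigma

random.seed(0)
# Monte Carlo samples of a random objective (stand-in distribution)
losses = [abs(random.gauss(1.0, 0.5)) for _ in range(20000)]
mean_loss = sum(losses) / len(losses)
cv, er = cvar(losses), entropic_risk(losses)
print(mean_loss, cv, er)
```

In a risk-averse PDE-constrained problem, one of these functionals replaces the expectation of the objective, and the optimization trades mean performance against tail behavior.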
Fateen, Seif-Eddeen K.; Bonilla-Petriciolet, Adrian
2014-01-01
The search for reliable and efficient global optimization algorithms for solving phase stability and phase equilibrium problems in applied thermodynamics is an ongoing area of research. In this study, we evaluated and compared the reliability and efficiency of eight selected nature-inspired metaheuristic algorithms for solving difficult phase stability and phase equilibrium problems. These algorithms are the cuckoo search (CS), intelligent firefly (IFA), bat (BA), artificial bee colony (ABC), MAKHA, a hybrid between monkey algorithm and krill herd algorithm, covariance matrix adaptation evolution strategy (CMAES), magnetic charged system search (MCSS), and bare bones particle swarm optimization (BBPSO). The results clearly showed that CS is the most reliable of all methods as it successfully solved all thermodynamic problems tested in this study. CS proved to be a promising nature-inspired optimization method to perform applied thermodynamic calculations for process design. PMID:24967430
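As a flavor of the best-performing method above, here is a bare-bones cuckoo search: Lévy-flight moves around the current best nest plus abandonment of the worst fraction of nests each generation. It minimizes a sphere function as a stand-in for a Gibbs-energy surface; the population size, step scale, and bounds are illustrative, not tuned values from the study.

```python
import math, random

def cuckoo_search(f, dim=2, n_nests=15, iters=300, pa=0.25, seed=3):
    """Bare-bones cuckoo search: Levy-flight moves relative to the current
    best nest, plus abandonment of the worst fraction pa each generation."""
    rng = random.Random(seed)
    lo, hi = -5.0, 5.0
    beta = 1.5
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)

    def levy():  # Mantegna's algorithm for a heavy-tailed step length
        return rng.gauss(0, sigma) / abs(rng.gauss(0, 1)) ** (1 / beta)

    nests = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    fit = [f(x) for x in nests]
    for _ in range(iters):
        b = min(range(n_nests), key=fit.__getitem__)
        for i in range(n_nests):
            cand = [min(hi, max(lo, x + 0.01 * levy() * (x - nests[b][j])))
                    for j, x in enumerate(nests[i])]
            fc = f(cand)
            j = rng.randrange(n_nests)          # compare against a random nest
            if fc < fit[j]:
                nests[j], fit[j] = cand, fc
        worst = sorted(range(n_nests), key=fit.__getitem__)[-int(pa * n_nests):]
        for i in worst:                         # abandon the worst nests
            nests[i] = [rng.uniform(lo, hi) for _ in range(dim)]
            fit[i] = f(nests[i])
    b = min(range(n_nests), key=fit.__getitem__)
    return nests[b], fit[b]

sphere = lambda x: sum(v * v for v in x)        # stand-in objective
x_best, f_best = cuckoo_search(sphere)
print(x_best, f_best)
```

In the phase-equilibrium setting, the objective would instead be the Gibbs free energy or tangent-plane-distance function of the mixture.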
Teng, Chaoyi; Demers, Hendrix; Brodusch, Nicolas; Waters, Kristian; Gauvin, Raynald
2018-06-04
A number of techniques for the characterization of rare earth minerals (REM) have been developed and are widely applied in the mining industry. However, most of them are limited to global analysis due to their low spatial resolution. In this work, phase map analyses were performed on REM with an annular silicon drift detector (aSDD) attached to a field emission scanning electron microscope. The optimal conditions for the aSDD were explored, and the high-resolution phase maps generated at a low accelerating voltage identify phases at the micron scale. Comparisons between the annular and a conventional SDD show that the aSDD, operated at optimized conditions, makes phase mapping a practical solution for choosing an appropriate grinding size, judging the efficiency of different separation processes, and optimizing a REM beneficiation flowsheet.
Coma measurement by use of an alternating phase-shifting mask mark with a specific phase width
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qiu Zicheng; Wang Xiangzhao; Yuan Qiongyan
2009-01-10
The correlation between the coma sensitivity of the alternating phase-shifting mask (Alt-PSM) mark and the mark's structure is studied based on the Hopkins theory of partially coherent imaging and positive resist optical lithography (PROLITH) simulation. It is found that an optimized Alt-PSM mark with a phase width of two-thirds its pitch has a higher sensitivity to coma than Alt-PSM marks with the same pitch and different phase widths. The pitch of the Alt-PSM mark is also optimized by PROLITH simulation, and the structure of p=1.92λ/NA and pw=2p/3 proves to have the highest sensitivity. The optimized Alt-PSM mark is used as a measurement mark to retrieve coma aberration from the projection optics in lithographic tools. In comparison with an ordinary Alt-PSM mark with a phase width of half its pitch, the measurement accuracies of Z7 and Z14 clearly increase.
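The quoted optimal geometry is simple arithmetic once the wavelength and numerical aperture are fixed. Both values below are assumed for illustration (an ArF scanner); the abstract itself does not state them.

```python
# Optimized Alt-PSM mark geometry from the abstract: p = 1.92*lambda/NA, pw = 2p/3
wavelength_nm = 193.0      # assumed ArF exposure wavelength
NA = 0.75                  # assumed numerical aperture

p = 1.92 * wavelength_nm / NA
pw = 2.0 * p / 3.0
print(f"pitch = {p:.1f} nm, phase width = {pw:.1f} nm (pw/p = {pw / p:.3f})")
```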