Multi-Positioning Mathematics Class Size: Teachers' Views
ERIC Educational Resources Information Center
Handal, Boris; Watson, Kevin; Maher, Marguerite
2015-01-01
This paper explores mathematics teachers' perceptions about class size and the impact class size has on teaching and learning in secondary mathematics classrooms. It seeks to understand teachers' views about optimal class sizes and their thoughts about the education variables that influence these views. The paper draws on questionnaire responses…
Zhang, Jinshui; Yuan, Zhoumiqi; Shuai, Guanyuan; Pan, Yaozhong; Zhu, Xiufang
2017-04-26
This paper developed an approach, the window-based validation set for support vector data description (WVS-SVDD), to determine optimal parameters for the support vector data description (SVDD) model to map specific land cover by integrating training and window-based validation sets. Compared to the conventional approach, in which the validation set includes target and outlier pixels selected visually and randomly, the validation set derived from WVS-SVDD constructed a tighter hypersphere because of the compact constraint imposed by outlier pixels located adjacent to the target class in the spectral feature space. The overall accuracies achieved for wheat and bare land were as high as 89.25% and 83.65%, respectively. However, the target class was underestimated because the validation set covered only a small fraction of the heterogeneous spectra of the target class. Different window sizes were then tested to acquire more wheat pixels for the validation set. The results showed that classification accuracy increased with increasing window size, and the overall accuracies were higher than 88% at all window sizes. Moreover, WVS-SVDD showed much less sensitivity to untrained classes than the multi-class support vector machine (SVM) method. Therefore, the developed method showed its merits in using the optimal parameters, the tradeoff coefficient (C) and kernel width (s), for mapping homogeneous specific land cover.
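As an illustrative sketch of this kind of parameter selection for a one-class boundary model (using scikit-learn's OneClassSVM, which is closely related to SVDD with an RBF kernel, as a stand-in; the spectra, grid values, and nu/gamma parameterization below are hypothetical, not the paper's):

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Hypothetical 4-band spectra: target class (e.g., wheat) clustered,
# outlier pixels lying nearby in feature space.
target = rng.normal(0.0, 1.0, size=(200, 4))
outliers = rng.normal(3.0, 1.0, size=(50, 4))

X_train = target[:150]                        # target-only training set
X_val = np.vstack([target[150:], outliers])   # validation set with nearby outliers
y_val = np.r_[np.ones(50), -np.ones(50)]      # +1 target, -1 outlier

best = None
for nu in (0.01, 0.05, 0.1):        # plays the role of the tradeoff coefficient C
    for gamma in (0.05, 0.2, 1.0):  # plays the role of the kernel width s
        model = OneClassSVM(nu=nu, gamma=gamma).fit(X_train)
        acc = np.mean(model.predict(X_val) == y_val)
        if best is None or acc > best[0]:
            best = (acc, nu, gamma)

print(best)
```

The validation set mixes held-out target pixels with outliers adjacent to the target class, mimicking the tightened-hypersphere idea; the winning (nu, gamma) pair is the analogue of the optimal (C, s).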
A Heuristics Approach for Classroom Scheduling Using Genetic Algorithm Technique
NASA Astrophysics Data System (ADS)
Ahmad, Izah R.; Sufahani, Suliadi; Ali, Maselan; Razali, Siti N. A. M.
2018-04-01
Reshuffling and arranging classrooms based on audience capacity, available facilities, lecturing time and other factors can make classroom scheduling highly complex. To enhance productivity in classroom planning, this paper proposes a heuristic approach for timetabling optimization. A new algorithm was produced to handle the timetabling problem at a university. The proposed heuristic approach leads to better utilization of the available classroom space for a given timetable of courses at the university. A Genetic Algorithm, implemented in the Java programming language, was used in this study with the aim of reducing conflicts and optimizing fitness. The algorithm considered the number of students in each class, class time, class size, the time availability of each room and the lecturer in charge of each class.
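A minimal sketch of such a GA-based timetabler (in Python rather than the study's Java; the instance data, penalty rules, and GA settings below are hypothetical):

```python
import random

random.seed(1)

# Hypothetical instance: 6 classes with sizes, 3 rooms with capacities, 4 time slots.
class_sizes = [30, 25, 40, 20, 35, 15]
room_caps = [45, 30, 25]
n_slots = 4

def conflicts(chrom):
    """Count capacity violations and room/slot double bookings (lower is fitter)."""
    penalty, used = 0, set()
    for size, (room, slot) in zip(class_sizes, chrom):
        if size > room_caps[room]:
            penalty += 1
        if (room, slot) in used:          # room already booked at this slot
            penalty += 1
        used.add((room, slot))
    return penalty

def random_chrom():
    return [(random.randrange(len(room_caps)), random.randrange(n_slots))
            for _ in class_sizes]

pop = [random_chrom() for _ in range(40)]
for _ in range(200):                      # generations
    pop.sort(key=conflicts)
    parents = pop[:20]                    # truncation selection (elitist)
    children = []
    for _ in range(20):
        a, b = random.sample(parents, 2)
        cut = random.randrange(len(class_sizes))
        child = a[:cut] + b[cut:]         # one-point crossover
        if random.random() < 0.3:         # mutation: reassign one class
            child[random.randrange(len(class_sizes))] = (
                random.randrange(len(room_caps)), random.randrange(n_slots))
        children.append(child)
    pop = parents + children

best = min(pop, key=conflicts)
print(best, conflicts(best))
```

Each chromosome assigns every class a (room, slot) pair; elitist selection guarantees the best schedule never degrades across generations.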
Improved Dot Diffusion For Image Halftoning
1999-01-01
The dot diffusion method for digital halftoning has the advantage of parallelism, unlike the error diffusion method. The method was recently improved...by optimization of the so-called class matrix so that the resulting halftones are comparable to error diffused halftones. In this paper we will...first review the dot diffusion method. Previously, 8 x 8 class matrices were used for the dot diffusion method. A problem with this size of class matrix is
Sparse Solutions for Single Class SVMs: A Bi-Criterion Approach
NASA Technical Reports Server (NTRS)
Das, Santanu; Oza, Nikunj C.
2011-01-01
In this paper we propose an innovative learning algorithm - a variation of the one-class nu Support Vector Machine (SVM) learning algorithm - that produces sparser solutions with much reduced computational complexity. The proposed technique returns an approximate solution, nearly as good as the solution set obtained by the classical approach, by minimizing the original risk function along with a regularization term. We introduce a bi-criterion optimization that helps guide the search towards the optimal set in much reduced time. The outcome of the proposed learning technique was compared with the benchmark one-class SVM algorithm, which more often leads to solutions with redundant support vectors. Throughout the analysis, the problem size for both optimization routines was kept consistent. We have tested the proposed algorithm on a variety of data sources under different conditions to demonstrate its effectiveness. In all cases the proposed algorithm closely preserves the accuracy of standard one-class nu SVMs while reducing both training time and test time by several factors.
A rapid method for optimization of the rocket propulsion system for single-stage-to-orbit vehicles
NASA Technical Reports Server (NTRS)
Eldred, C. H.; Gordon, S. V.
1976-01-01
A rapid analytical method for the optimization of rocket propulsion systems is presented for a vertical take-off, horizontal landing, single-stage-to-orbit launch vehicle. This method utilizes trade-offs between propulsion characteristics affecting flight performance and engine system mass. The performance results from a point-mass trajectory optimization program are combined with a linearized sizing program to establish vehicle sizing trends caused by propulsion system variations. The linearized sizing technique was developed for the class of vehicle systems studied herein. The specific examples treated are the optimization of nozzle expansion ratio and lift-off thrust-to-weight ratio to achieve either minimum gross mass or minimum dry mass. Assumed propulsion system characteristics are high chamber pressure, liquid oxygen and liquid hydrogen propellants, conventional bell nozzles, and the same fixed nozzle expansion ratio for all engines on a vehicle.
NASA Technical Reports Server (NTRS)
Spangelo, Sara
2015-01-01
The goal of this paper is to explore the mission opportunities that are uniquely enabled by U-class Solar Electric Propulsion (SEP) technologies. Small SEP thrusters offer significant advantages relative to existing technologies and will revolutionize the class of mission architectures that small spacecraft can accomplish by enabling trajectory maneuvers with significant change-in-velocity requirements and reaction-wheel-free attitude control. This paper aims to develop and apply a common system-level modeling framework to evaluate these thrusters for relevant upcoming mission scenarios, taking into account the mass, power, volume, and operational constraints of small, highly constrained missions. We will identify the optimal technology for broad classes of mission applications for different U-class spacecraft sizes and provide insights into what constrains system performance, to identify technology areas where improvements are needed.
Deeper and sparser nets are optimal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beiu, V.; Makaruk, H.E.
1998-03-01
The starting points of this paper are two size-optimal solutions: (1) one for implementing arbitrary Boolean functions (Horne and Hush, 1994); and (2) another one for implementing certain sub-classes of Boolean functions (Red`kin, 1970). Because VLSI implementations do not cope well with highly interconnected nets--the area of a chip grows with the cube of the fan-in (Hammerstrom, 1988)--this paper will analyze the influence of limited fan-in on the size optimality of the two solutions mentioned. First, the authors will extend a result from Horne and Hush (1994) valid for fan-in {Delta} = 2 to arbitrary fan-in. Second, they will prove that size-optimal solutions are obtained for small constant fan-in for both constructions, while relative minimum size solutions can be obtained for fan-ins strictly lower than linear. These results are in agreement with similar ones proving that for small constant fan-ins ({Delta} = 6...9) there exist VLSI-optimal (i.e., minimizing AT{sup 2}) solutions (Beiu, 1997a), while there are similar small constants relating to the capacity of processing information (Miller, 1956).
Deeper sparsely nets are size-optimal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beiu, V.; Makaruk, H.E.
1997-12-01
The starting points of this paper are two size-optimal solutions: (i) one for implementing arbitrary Boolean functions (Horne and Hush, 1994); and (ii) another one for implementing certain sub-classes of Boolean functions (Red`kin, 1970). Because VLSI implementations do not cope well with highly interconnected nets--the area of a chip grows with the cube of the fan-in (Hammerstrom, 1988)--this paper will analyze the influence of limited fan-in on the size optimality of the two solutions mentioned. First, the authors will extend a result from Horne and Hush (1994) valid for fan-in {Delta} = 2 to arbitrary fan-in. Second, they will prove that size-optimal solutions are obtained for small constant fan-in for both constructions, while relative minimum size solutions can be obtained for fan-ins strictly lower than linear. These results are in agreement with similar ones proving that for small constant fan-ins ({Delta} = 6...9) there exist VLSI-optimal (i.e., minimizing AT{sup 2}) solutions (Beiu, 1997a), while there are similar small constants relating to the capacity of processing information (Miller, 1956).
A general optimality criteria algorithm for a class of engineering optimization problems
NASA Astrophysics Data System (ADS)
Belegundu, Ashok D.
2015-05-01
An optimality criteria (OC)-based algorithm for optimization of a general class of nonlinear programming (NLP) problems is presented. The algorithm is only applicable to problems where the objective and constraint functions satisfy certain monotonicity properties. For multiply constrained problems which satisfy these assumptions, the algorithm is attractive compared with existing NLP methods as well as prevalent OC methods, as the latter involve computationally expensive active set and step-size control strategies. The fixed point algorithm presented here is applicable not only to structural optimization problems but also to certain problems that occur in resource allocation and inventory models. Convergence aspects are discussed. The fixed point update or resizing formula is given physical significance, which brings out a strength and trim feature. The number of function evaluations remains independent of the number of variables, allowing the efficient solution of problems with a large number of variables.
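A sketch of a fixed-point OC update on a toy resource-allocation problem with the required monotonicity (minimize sum(c_i / x_i) subject to a linear budget; the data are hypothetical and the update is the generic OC resizing rule, not necessarily the paper's exact scheme):

```python
import numpy as np

# Hypothetical instance: minimize sum(c_i / x_i) subject to sum(a_i * x_i) = b,
# x_i > 0. Objective is monotone decreasing, constraint monotone increasing,
# as OC methods require.
c = np.array([4.0, 1.0, 9.0])
a = np.array([1.0, 2.0, 1.0])
b = 10.0

x = np.ones_like(c)                               # initial design
for _ in range(50):
    d = c / x**2                                  # sensitivity -df/dx_i ("strength")
    lam_sqrt = (x * np.sqrt(a * d)).sum() / b     # multiplier from the constraint
    x = x * np.sqrt(d / a) / lam_sqrt             # fixed-point resizing update

# Closed-form optimum from the stationarity condition x_i = sqrt(c_i / (lam * a_i)):
x_exact = np.sqrt(c / a) * b / np.sqrt(a * c).sum()
print(x, x_exact)
```

The per-iteration cost is a handful of vector operations, so the number of function evaluations does not grow with the number of design variables.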
Crabtree, Nathaniel M; Moore, Jason H; Bowyer, John F; George, Nysia I
2017-01-01
A computational evolution system (CES) is a knowledge discovery engine that can identify subtle, synergistic relationships in large datasets. Pareto optimization allows CESs to balance accuracy with model complexity when evolving classifiers. Using Pareto optimization, a CES is able to identify a very small number of features while maintaining high classification accuracy. A CES can be designed for various types of data, and the user can exploit expert knowledge about the classification problem in order to improve discrimination between classes. These characteristics give CES an advantage over other classification and feature selection algorithms, particularly when the goal is to identify a small number of highly relevant, non-redundant biomarkers. Previously, CESs have been developed only for binary class datasets. In this study, we developed a multi-class CES. The multi-class CES was compared to three common feature selection and classification algorithms: support vector machine (SVM), random k-nearest neighbor (RKNN), and random forest (RF). The algorithms were evaluated on three distinct multi-class RNA sequencing datasets. The comparison criteria were run-time, classification accuracy, number of selected features, and stability of selected feature set (as measured by the Tanimoto distance). The performance of each algorithm was data-dependent. CES performed best on the dataset with the smallest sample size, indicating that CES has a unique advantage since the accuracy of most classification methods suffer when sample size is small. The multi-class extension of CES increases the appeal of its application to complex, multi-class datasets in order to identify important biomarkers and features.
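The stability criterion used in the comparison can be stated directly; a minimal sketch of the Tanimoto (Jaccard) distance between two selected feature sets (the gene names are hypothetical):

```python
def tanimoto_distance(set_a, set_b):
    """1 - |A intersect B| / |A union B|: 0 for identical sets, 1 for disjoint."""
    a, b = set(set_a), set(set_b)
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

# Hypothetical feature sets selected on two cross-validation folds.
fold1 = {"geneA", "geneB", "geneC"}
fold2 = {"geneB", "geneC", "geneD"}
print(tanimoto_distance(fold1, fold2))
```

A feature selector is stable when this distance is small across folds, i.e., it keeps picking the same biomarkers.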
Form and Objective of the Decision Rule in Absolute Identification
NASA Technical Reports Server (NTRS)
Balakrishnan, J. D.
1997-01-01
In several conditions of a line length identification experiment, the subjects' decision making strategies were systematically biased against the responses on the edges of the stimulus range. When the range and number of the stimuli were small, the bias caused the percentage of correct responses to be highest in the center and lowest on the extremes of the range. Two general classes of decision rules that would explain these results are considered. The first class assumes that subjects intend to adopt an optimal decision rule, but systematically misrepresent one or more parameters of the decision making context. The second class assumes that subjects use a different measure of performance than the one assumed by the experimenter: instead of maximizing the chances of a correct response, the subject attempts to minimize the expected size of the response error (a "fidelity criterion"). In a second experiment, extended experience and feedback did not diminish the bias effect, but explicitly penalizing all response errors equally, regardless of their size, did reduce or eliminate it in some subjects. Both results favor the fidelity criterion over the optimal rule.
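The two classes of decision rule can be contrasted on a toy posterior (the numbers are hypothetical): maximizing the probability of a correct response picks the posterior mode, while the fidelity criterion, which minimizes the expected absolute response error, effectively picks the posterior median and is pulled away from the edges of the stimulus range.

```python
import numpy as np

# Hypothetical posterior over 5 line-length categories given one observation.
posterior = np.array([0.4, 0.1, 0.1, 0.1, 0.3])
categories = np.arange(1, 6)

# Optimal rule for percent correct: respond with the posterior mode.
map_response = categories[np.argmax(posterior)]

# Fidelity criterion: minimize E|response - stimulus| over candidate responses.
expected_abs_error = [(posterior * np.abs(categories - r)).sum() for r in categories]
fidelity_response = categories[int(np.argmin(expected_abs_error))]

print(map_response, fidelity_response)
```

Here the mode sits at the edge of the range (category 1) but the fidelity rule answers 2, illustrating the bias against edge responses the experiment observed.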
A hierarchical word-merging algorithm with class separability measure.
Wang, Lei; Zhou, Luping; Shen, Chunhua; Liu, Lingqiao; Liu, Huan
2014-03-01
In image recognition with the bag-of-features model, a small-sized visual codebook is usually preferred to obtain a low-dimensional histogram representation and high computational efficiency. Such a visual codebook has to be discriminative enough to achieve excellent recognition performance. To create a compact and discriminative codebook, in this paper we propose to merge the visual words in a large-sized initial codebook by maximally preserving class separability. We first show that this results in a difficult optimization problem. To deal with this situation, we devise a suboptimal but very efficient hierarchical word-merging algorithm, which optimally merges two words at each level of the hierarchy. By exploiting the characteristics of the class separability measure and designing a novel indexing structure, the proposed algorithm can hierarchically merge 10,000 visual words down to two words in merely 90 seconds. Also, to show the properties of the proposed algorithm and reveal its advantages, we conduct detailed theoretical analysis to compare it with another hierarchical word-merging algorithm that maximally preserves mutual information, obtaining interesting findings. Experimental studies are conducted to verify the effectiveness of the proposed algorithm on multiple benchmark data sets. As shown, it can efficiently produce more compact and discriminative codebooks than the state-of-the-art hierarchical word-merging algorithms, especially when the size of the codebook is significantly reduced.
NASA Astrophysics Data System (ADS)
Dao, Son Duy; Abhary, Kazem; Marian, Romeo
2017-06-01
Integration of production planning and scheduling is a class of problems commonly found in the manufacturing industry. This class of problems, associated with precedence constraints, has been previously modeled and optimized by the authors; it requires simultaneous optimization along several dimensions: what to make, how many to make, where to make them and in what order. It is a combinatorial, NP-hard problem, for which no polynomial-time algorithm is known to produce an optimal result on a random graph. In this paper, the further development of a Genetic Algorithm (GA) for this integrated optimization is presented. Because of the dynamic nature of the problem, the size of its solution is variable. To deal with this variability and find an optimal solution to the problem, a GA with new features in chromosome encoding, crossover, mutation and selection, as well as in algorithm structure, is developed herein. With the proposed structure, the GA is able to "learn" from its experience. The robustness of the proposed GA is demonstrated by a complex numerical example in which its performance is compared with those of three commercial optimization solvers.
Crack Resistance of Welded Joints of Pipe Steels of Strength Class K60 of Different Alloying Systems
NASA Astrophysics Data System (ADS)
Tabatchikova, T. I.; Tereshchenko, N. A.; Yakovleva, I. L.; Makovetskii, A. N.; Shander, S. V.
2018-03-01
The crack resistance of welded joints of pipe steels of strength class K60 and different alloying systems is studied. The parameter of the crack tip opening displacement (CTOD) is shown to be dependent on the size of the austenite grains and on the morphology of bainite in the superheated region of the heat-affected zone of the weld. The crack resistance is shown to be controllable due to optimization of the alloying system.
NASA Astrophysics Data System (ADS)
Koelle, D. E.; Mueller, W.; Schweig, H.
1985-10-01
The standardized propulsion module for future spacecraft in the 1800-2700 kg class is described. The definition of the propulsion system and its thrust level are addressed, and the design of the orbital propulsion module (OPM) is shown and described. The masses of various components are given. The OPM application and size optimization for the Ariane 4 launchers are examined, and the cost-saving aspects of OPM and its space applications are discussed.
NASA Astrophysics Data System (ADS)
Kumar, Amit; Dorodnikov, Maxim; Splettstößer, Thomas; Kuzyakov, Yakov; Pausch, Johanna
2017-04-01
Soil aggregation and microbial activities within the aggregates are important factors regulating soil carbon (C) turnover. A reliable and sensitive proxy for microbial activity is activity of extracellular enzymes (EEA). In the present study, effects of soil aggregates on EEA were investigated under three maize plant densities (Low, Normal, and High). Bulk soil was fractionated into three aggregate size classes (>2000 µm large macroaggregates; 2000-250 µm small macroaggregates; <250 µm microaggregates) by optimal-moisture sieving. Microbial biomass and EEA (β-1,4-glucosidase (BG), β-1,4-N-acetylglucosaminidase (NAG), L-leucine aminopeptidase (LAP) and acid phosphatase (acP)) catalyzing soil organic matter (SOM) decomposition were measured in rooted soil of maize and soil from bare fallow. Microbial biomass C (Cmic) decreased with decreasing aggregate size classes. Potential and specific EEA (per unit of Cmic) increased from macro- to microaggregates. In comparison with bare fallow soil, specific EEA of microaggregates in rooted soil was higher by up to 73%, 31%, 26%, and 92% for BG, NAG, acP and LAP, respectively. Moreover, high plant density decreased macroaggregates by 9% compared to bare fallow. Enhanced EEA in three aggregate size classes demonstrated activation of microorganisms by roots. Strong EEA in microaggregates can be explained by microaggregates' localization within the soil. Originally adhering to surfaces of macroaggregates, microaggregates were preferentially exposed to C substrates and nutrients, thereby promoting microbial activity.
An iterative approach to optimize change classification in SAR time series data
NASA Astrophysics Data System (ADS)
Boldt, Markus; Thiele, Antje; Schulz, Karsten; Hinz, Stefan
2016-10-01
The detection of changes using remote sensing imagery has become a broad field of research, with many approaches for many different applications. Besides the simple detection of changes between at least two images acquired at different times, analyses that aim at the change type or category are at least equally important. In this study, an approach for the semi-automatic classification of change segments is presented. A sparse dataset is considered to ensure fast and simple applicability to practical problems. The dataset is given by 15 high resolution (HR) TerraSAR-X (TSX) amplitude images acquired over a period of one year (11/2013 to 11/2014). The scenery contains the airport of Stuttgart (GER) and its surroundings, including urban, rural, and suburban areas. Time series imagery offers the advantage of analyzing the change frequency of selected areas. In this study, the focus is set on the analysis of small, frequently changing regions like parking areas, construction sites and collecting points consisting of high activity (HA) change objects. For each HA change object, suitable features are extracted and k-means clustering is applied as the categorization step. The resulting clusters are finally compared to a previously introduced knowledge-based class catalogue, which is modified until an optimal class description results. In other words, the subjective understanding of the scenery semantics is optimized against the reality given by the data. In this way, even a sparse dataset containing only amplitude imagery can be evaluated without requiring comprehensive training datasets. Falsely defined classes might be rejected. Furthermore, classes which were defined too coarsely might be divided into sub-classes, and classes which were initially defined too narrowly might be merged. An optimal classification results when the combination of previously defined key indicators (e.g., the number of clusters per class) reaches an optimum.
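A minimal sketch of the categorization step (the two-feature description of HA change objects and the cluster count below are hypothetical; the study's feature set is richer):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical features per HA change object: (segment area, change frequency),
# drawn from two notional categories such as parking areas vs construction sites.
parking = rng.normal([200.0, 0.8], [20.0, 0.05], size=(30, 2))
construction = rng.normal([800.0, 0.3], [50.0, 0.05], size=(30, 2))
X = np.vstack([parking, construction])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
# The clusters would then be compared against the knowledge-based class
# catalogue, and classes split, merged, or rejected before re-clustering.
print(np.bincount(labels))
```

In the paper's loop, the number of clusters per class is one of the key indicators used to decide when the catalogue matches the data.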
NASA Astrophysics Data System (ADS)
Coutris, Pierre; Leroy, Delphine; Fontaine, Emmanuel; Schwarzenboeck, Alfons; Strapp, J. Walter
2016-04-01
A new method to retrieve cloud water content from in-situ measured 2D particle images from optical array probes (OAP) is presented. With the overall objective of building a statistical model of crystal mass as a function of size, environmental temperature and crystal microphysical history, this study presents a methodology to retrieve the mass of crystals sorted by size from 2D images using a numerical optimization approach. The methodology is validated using two datasets of in-situ measurements gathered during two airborne field campaigns held in Darwin, Australia (2014), and Cayenne, French Guiana (2015), in the framework of the High Altitude Ice Crystals (HAIC) / High Ice Water Content (HIWC) projects. During these campaigns, a Falcon F-20 research aircraft equipped with state-of-the-art microphysical instrumentation sampled numerous mesoscale convective systems (MCS) in order to study the dynamical and microphysical properties and processes of high ice water content areas. Experimentally, an isokinetic evaporator probe, referred to as IKP-2, provides a reference measurement of the total water content (TWC), which equals the ice water content (IWC) when (supercooled) liquid water is absent. Two optical array probes, namely the 2D-S and PIP, produce 2D images of individual crystals ranging from 50 μm to 12840 μm, from which particle size distributions (PSD) are derived. Mathematically, the problem is formulated as an inverse problem in which the crystal mass is assumed constant over a size class and is computed for each size class from IWC and PSD data: PSD · m = IWC. This problem is solved using a numerical optimization technique in which an objective function is minimized. The objective function is defined as J(m) = ‖PSD · m − IWC‖² + λ · R(m), where the regularization parameter λ and the regularization function R(m) are tuned based on data characteristics. The method is implemented in two steps.
First, the method is developed on synthetic crystal populations in order to evaluate the behavior of the iterative algorithm and the influence of data noise on the quality of the results, and to set up a regularization strategy. To this end, 3D synthetic crystals were generated and numerically processed to recreate the noise caused by 2D projections of randomly oriented 3D crystals and by the discretization of the PSD into size classes of predefined width. Subsequently, the method is applied to the experimental datasets, and the comparison between the retrieved TWC (this methodology) and the measured one (IKP-2 data) enables an evaluation of the consistency and accuracy of the mass solution retrieved by the numerical optimization approach, as well as a preliminary assessment of the influence of temperature and dynamical parameters on crystal masses.
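A sketch of this kind of regularized inversion on synthetic data (here with a simple Tikhonov penalty R(m) = ‖m‖² folded into a non-negative least-squares solve; the size-class masses, bin count and noise level are hypothetical, not the campaign values):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_obs, n_bins = 40, 5
m_true = np.array([1e-3, 5e-3, 2e-2, 8e-2, 3e-1])  # hypothetical mass per size class

PSD = rng.uniform(0.0, 10.0, size=(n_obs, n_bins))      # counts per size class
IWC = PSD @ m_true + rng.normal(0.0, 1e-3, size=n_obs)  # noisy "reference" IWC

lam = 1e-4  # regularization parameter
# Tikhonov term folded into an augmented non-negative least-squares system:
# minimize ||PSD m - IWC||^2 + lam ||m||^2 subject to m >= 0.
A_aug = np.vstack([PSD, np.sqrt(lam) * np.eye(n_bins)])
b_aug = np.concatenate([IWC, np.zeros(n_bins)])
m_est, _ = nnls(A_aug, b_aug)
print(m_est)
```

The non-negativity constraint encodes the physical requirement that per-class crystal masses cannot be negative; λ trades data fit against solution smoothness.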
Richard, Gontran; Touhami, Seddik; Zeghloul, Thami; Dascalescu, Lucien
2017-02-01
Plate-type electrostatic separators are commonly employed for the selective sorting of conductive and non-conductive granular materials. The aim of this work is to identify the optimal operating conditions of such equipment when employed for separating copper and plastics from either flexible or rigid electric wire wastes. The experiments are performed according to the response surface methodology, on samples composed of either "calibrated" particles, obtained by manually cutting electric wires to a predefined length (4 mm), or actual machine-grinded scraps, characterized by a relatively wide size distribution (1-4 mm). The results point out the effect of particle size and shape on the effectiveness of the electrostatic separation. Different optimal operating conditions are found for flexible and rigid wires. Separate processing of the two classes of wire wastes is recommended.
Lycett, Kristen A; Chung, J Sook; Pitula, Joseph S
2018-01-01
In the blue crab, Callinectes sapidus, early studies suggested a relationship between smaller crabs, which molt more frequently, and higher rates of infection by the dinoflagellate parasite, Hematodinium perezi. In order to better explore the influence of size and molting on infections, blue crabs were collected from the Maryland coastal bays and screened for the presence of H. perezi in hemolymph samples using a quantitative PCR assay. Molt stage was determined by a radioimmunoassay which measured ecdysteroid concentrations in blue crab hemolymph. Differences were seen in infection prevalence between size classes, with the medium size class (crabs 61 to 90 mm carapace width) and juvenile crabs (≤ 30 mm carapace width) having the highest infection prevalence at 47.2% and 46.7%, respectively. All size classes were susceptible to infection, although fall months favored disease acquisition by juveniles, whereas mid-sized animals (31-90 mm carapace width) acquired infection predominantly in summer. Disease intensity was also most pronounced in the summer, with blue crabs > 61 mm being primary sources of proliferation. Molt status appeared to be influenced by infection, with infected crabs having significantly lower concentrations of ecdysteroids than uninfected crabs in the spring and the fall. We hypothesize that infection by H. perezi may increase molt intervals, with a delay in the spring molt cycle as an evolutionary adaptation functioning to coincide with increased host metabolism, providing optimal conditions for H. perezi propagation. Regardless of season, postmolt crabs harbored significantly higher proportions of moderate and heavy infections, suggesting that the process of ecdysis, and the postmolt recovery period, has a positive effect on parasite proliferation.
Quadratic constrained mixed discrete optimization with an adiabatic quantum optimizer
NASA Astrophysics Data System (ADS)
Chandra, Rishabh; Jacobson, N. Tobias; Moussa, Jonathan E.; Frankel, Steven H.; Kais, Sabre
2014-07-01
We extend the family of problems that may be implemented on an adiabatic quantum optimizer (AQO). When a quadratic optimization problem has at least one set of discrete controls and the constraints are linear, we call this a quadratic constrained mixed discrete optimization (QCMDO) problem. QCMDO problems are NP-hard, and no efficient classical algorithm for their solution is known. Included in the class of QCMDO problems are combinatorial optimization problems constrained by a linear partial differential equation (PDE) or system of linear PDEs. An essential complication commonly encountered in solving this type of problem is that the linear constraint may introduce many intermediate continuous variables into the optimization while the computational cost grows exponentially with problem size. We resolve this difficulty by developing a constructive mapping from QCMDO to quadratic unconstrained binary optimization (QUBO) such that the size of the QUBO problem depends only on the number of discrete control variables. With a suitable embedding, taking into account the physical constraints of the realizable coupling graph, the resulting QUBO problem can be implemented on an existing AQO. The mapping itself is efficient, scaling cubically with the number of continuous variables in the general case and linearly in the PDE case if an efficient preconditioner is available.
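For intuition, the target QUBO form can be solved by brute force at toy sizes (the 4-variable matrix below is hypothetical; enumeration scales as 2^n, which is exactly why mapping larger instances onto an AQO is attractive):

```python
import itertools
import numpy as np

# Hypothetical 4-variable QUBO: minimize x^T Q x over x in {0, 1}^n
# (upper-triangular Q; diagonal terms are linear biases, off-diagonal couplings).
Q = np.array([[-3.0, 2.0, 0.0, 0.0],
              [ 0.0, -2.0, 2.0, 0.0],
              [ 0.0, 0.0, -3.0, 2.0],
              [ 0.0, 0.0, 0.0, -2.0]])

best_x, best_e = None, np.inf
for bits in itertools.product([0, 1], repeat=4):
    x = np.array(bits, dtype=float)
    e = x @ Q @ x
    if e < best_e:
        best_x, best_e = bits, e

print(best_x, best_e)
```

The paper's contribution is a mapping whose QUBO size depends only on the number of discrete controls, so the continuous variables introduced by the PDE constraint never enter this binary search space.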
Evolutionary pattern search algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, W.E.
1995-09-19
This paper defines a class of evolutionary algorithms called evolutionary pattern search algorithms (EPSAs) and analyzes their convergence properties. This class of algorithms is closely related to evolutionary programming, evolution strategies and real-coded genetic algorithms. EPSAs are self-adapting systems that modify the step size of the mutation operator in response to the success of previous optimization steps. The rule used to adapt the step size can be used to provide a stationary point convergence theory for EPSAs on any continuous function. This convergence theory is based on an extension of the convergence theory for generalized pattern search methods. An experimental analysis of the performance of EPSAs demonstrates that these algorithms can perform a level of global search that is comparable to that of canonical EAs. We also describe a stopping rule for EPSAs, which reliably terminated near stationary points in our experiments. This is the first stopping rule for any class of EAs that can terminate at a given distance from stationary points.
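A minimal sketch of the generalized pattern-search step-size rule on which the convergence theory rests (deterministic coordinate polling rather than a full EPSA; the test function and contraction factor are illustrative): the step is contracted only after a failed poll, and its final size acts as the stopping rule near a stationary point.

```python
def sphere(x):
    """Illustrative smooth test function with minimum at the origin."""
    return sum(v * v for v in x)

x = [5.0, -3.0]
step = 1.0
while step > 1e-8:
    improved = False
    for i in range(len(x)):                 # poll the 2n coordinate directions
        for d in (step, -step):
            trial = list(x)
            trial[i] += d
            if sphere(trial) < sphere(x):
                x, improved = trial, True   # accept any improving poll point
    if not improved:
        step *= 0.5                         # failed poll: contract the step size

print(x, step)
```

When the step can no longer produce an improving trial, it shrinks geometrically; terminating once it falls below a threshold leaves the iterate within a known distance of a stationary point.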
Trophic Strategies of Unicellular Plankton.
Chakraborty, Subhendu; Nielsen, Lasse Tor; Andersen, Ken H
2017-04-01
Unicellular plankton employ trophic strategies ranging from pure photoautotrophs through mixotrophy to obligate heterotrophs (phagotrophs), with cell sizes from 10⁻⁸ to 1 μg C. A full understanding of how trophic strategy and cell size depend on resource environment and predation is lacking. To this end, we develop and calibrate a trait-based model for unicellular planktonic organisms characterized by four traits: cell size and investments in phototrophy, nutrient uptake, and phagotrophy. We use the model to predict how optimal trophic strategies depend on cell size under various environmental conditions, including seasonal succession. We identify two mixotrophic strategies: generalist mixotrophs investing in all three investment traits and obligate mixotrophs investing only in phototrophy and phagotrophy. We formulate two conjectures: (1) most cells are limited by organic carbon; however, small unicellulars are colimited by organic carbon and nutrients, and only large photoautotrophs and smaller mixotrophs are nutrient limited; (2) trophic strategy is bottom-up selected by the environment, while optimal size is top-down selected by predation. The focus on cell size and trophic strategies facilitates general insights into the strategies of a broad class of organisms in the size range from micrometers to millimeters that dominate the primary and secondary production of the world's oceans.
Li, Meng; Alvarez, Paulina; Bilgili, Ecevit
2017-05-30
Although wet stirred media milling has proven to be a robust process for producing nanoparticle suspensions of poorly water-soluble drugs and thereby enhancing their bioavailability, the selection of bead size has been largely empirical, lacking fundamental rationale. This study aims to establish such a rationale by investigating the impact of bead size at various stirrer speeds on drug breakage kinetics via a microhydrodynamic model. To this end, stable suspensions of griseofulvin, a model BCS Class II drug, were prepared using hydroxypropyl cellulose and sodium dodecyl sulfate. The suspensions were milled at four different stirrer speeds (1000-4000 rpm) using various sizes (50-1500 μm) of zirconia beads. Laser diffraction, SEM, and XRPD were used for characterization. Our results suggest that there is an optimal bead size that achieves the fastest breakage at each stirrer speed and that it shifts to a smaller size at higher speeds. Calculated microhydrodynamic parameters reveal two counteracting effects of bead size: a decrease in bead size produces more bead-bead collisions, each with less energy/force. The optimal bead size exhibits a negative power-law correlation with either the specific energy consumption or the microhydrodynamic parameters. Overall, this study rationalizes the use of smaller beads for more energetic wet media milling.
NASA Technical Reports Server (NTRS)
Olds, John Robert; Walberg, Gerald D.
1993-01-01
Multidisciplinary design optimization (MDO) is an emerging discipline within aerospace engineering. Its goal is to bring structure and efficiency to the complex design process associated with advanced aerospace launch vehicles. Aerospace vehicles generally require input from a variety of traditional aerospace disciplines - aerodynamics, structures, performance, etc. As such, traditional optimization methods cannot always be applied. Several multidisciplinary techniques and methods were proposed as potentially applicable to this class of design problem. Among the candidate options are calculus-based (or gradient-based) optimization schemes and parametric schemes based on design of experiments theory. A brief overview of several applicable multidisciplinary design optimization methods is included. Methods from both the calculus-based class and the parametric class are reviewed, but the research application reported focuses on methods from the parametric class. A vehicle of current interest was chosen as a test application for this research. The rocket-based combined-cycle (RBCC) single-stage-to-orbit (SSTO) launch vehicle combines elements of rocket and airbreathing propulsion in an attempt to produce an attractive option for launching medium-sized payloads into low earth orbit. The RBCC SSTO presents a particularly difficult problem for traditional one-variable-at-a-time optimization methods because of the lack of an adequate experience base and the highly coupled nature of the design variables. MDO, however, with its structured approach to design, is well suited to this problem. The results of the application of Taguchi methods, central composite designs, and response surface methods to the design optimization of the RBCC SSTO are presented. Attention is given to the aspect of Taguchi methods that attempts to locate a 'robust' design - that is, a design that is least sensitive to uncontrollable influences on the design.
Near-optimum minimum dry weight solutions are determined for the vehicle. A summary and evaluation of the various parametric MDO methods employed in the research are included. Recommendations for additional research are provided.
Leite, Marcos de Miranda Leão; Rezende, Carla Ferreira; Silva, José Roberto Feitosa
2013-12-01
The mangrove crab Ucides cordatus is an important resource of estuarine regions along the Brazilian coast. U. cordatus is distributed from Florida, U.S.A., to the coast of Santa Catarina, Brazil. The species plays an important role in processing leaf litter in the mangroves, which optimizes the processes of energy transfer and nutrient cycling, and is considered a keystone species in the ecosystem. Population declines have been reported in different parts of the Brazilian coast. In the present study we evaluated aspects of the population structure, sex ratio and size at morphological sexual maturity. We analyzed 977 specimens collected monthly over 24 months (2010-2012), in a mangrove of the Jaguaribe River, in the municipality of Aracati on the East coast of Ceará state, Northeastern Brazil. The study area has a mild semiarid tropical climate, with mean temperatures between 26 and 28 degrees C. The area is located within the eco-region of the semiarid Northeast coast, where mangroves occur in small areas and estuaries are subject to mesotidal regimes. The population structure was evaluated by the frequency distribution of size classes in each month, and the overall sex ratio was analyzed using the chi-square test. Size at morphological sexual maturity was estimated based on the allometry of the cheliped of the males and the abdomen width of the females, using the program REGRANS. The size-frequency distribution was unimodal in both sexes. The overall sex ratio (M:F) (1:0.6) was significantly different from 1:1. Analysis of the sex ratio by size class showed that the proportion of males increased significantly from the 55-60 mm size class upward, and this pattern persisted in the larger size classes. In the smaller size classes the sex ratio did not differ from 1:1. The size at morphological sexual maturity was estimated at a carapace width (CW) of 52 mm and 45 mm for males and females, respectively.
Analysis of the population parameters indicated that the population of U. cordatus in the Jaguaribe River mangrove is stable. However, constant monitoring of the population is required to detect any changes in the population attributes that may affect this stability.
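The chi-square test of the overall sex ratio can be reproduced from the reported figures. The male/female counts below (611 and 366) are assumptions derived from n = 977 and the reported 1:0.6 M:F ratio, not numbers taken from the paper.

```python
def chi_square_1to1(males, females):
    """Chi-square goodness-of-fit statistic against an expected 1:1 sex ratio."""
    n = males + females
    expected = n / 2.0
    return ((males - expected) ** 2 + (females - expected) ** 2) / expected

# Hypothetical counts consistent with n = 977 and the reported 1:0.6 M:F ratio
chi2 = chi_square_1to1(611, 366)
critical_1df_05 = 3.841          # chi-square critical value, df = 1, alpha = 0.05
ratio_is_biased = chi2 > critical_1df_05
```

With these counts the statistic far exceeds the critical value, consistent with the reported significant departure from 1:1.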
NASA Technical Reports Server (NTRS)
Generazio, Edward R. (Inventor)
2012-01-01
A method of validating a probability of detection (POD) testing system using directed design of experiments (DOE) includes recording an input data set of observed hit and miss or analog data for sample components as a function of size of a flaw in the components. The method also includes processing the input data set to generate an output data set having an optimal class width, assigning a case number to the output data set, and generating validation instructions based on the assigned case number. An apparatus includes a host machine for receiving the input data set from the testing system and an algorithm for executing DOE to validate the test system. The algorithm applies DOE to the input data set to determine a data set having an optimal class width, assigns a case number to that data set, and generates validation instructions based on the case number.
Design sensitivity analysis and optimization tool (DSO) for sizing design applications
NASA Technical Reports Server (NTRS)
Chang, Kuang-Hua; Choi, Kyung K.; Perng, Jyh-Hwa
1992-01-01
The DSO tool, a structural design software system that provides the designer with a graphics-based menu-driven design environment to perform easy design optimization for general applications, is presented. Three design stages, preprocessing, design sensitivity analysis, and postprocessing, are implemented in the DSO to allow the designer to carry out the design process systematically. A framework, including data base, user interface, foundation class, and remote module, has been designed and implemented to facilitate software development for the DSO. A number of dedicated commercial software/packages have been integrated in the DSO to support the design procedures. Instead of parameterizing an FEM, design parameters are defined on a geometric model associated with physical quantities, and the continuum design sensitivity analysis theory is implemented to compute design sensitivity coefficients using postprocessing data from the analysis codes. A tracked vehicle road wheel is given as a sizing design application to demonstrate the DSO's easy and convenient design optimization process.
Six-Degree-of-Freedom Trajectory Optimization Utilizing a Two-Timescale Collocation Architecture
NASA Technical Reports Server (NTRS)
Desai, Prasun N.; Conway, Bruce A.
2005-01-01
Six-degree-of-freedom (6DOF) trajectory optimization of a reentry vehicle is solved using a two-timescale collocation methodology. This class of 6DOF trajectory problems is characterized by two distinct timescales in the governing equations, where a subset of the states have high-frequency dynamics (the rotational equations of motion) while the remaining states (the translational equations of motion) vary comparatively slowly. With conventional collocation methods, the 6DOF problem size becomes extraordinarily large and difficult to solve. Utilizing the two-timescale collocation architecture, the problem size is reduced significantly. The converged solution shows a realistic landing profile and captures the appropriate high-frequency rotational dynamics. A large reduction in the overall problem size (by 55%) is attained with the two-timescale architecture as compared to the conventional single-timescale collocation method. Consequently, optimum 6DOF trajectory problems can now be solved efficiently using collocation, which was not previously possible for a system with two distinct timescales in the governing states.
NASA Astrophysics Data System (ADS)
Kim, Namkug; Seo, Joon Beom; Sung, Yu Sub; Park, Bum-Woo; Lee, Youngjoo; Park, Seong Hoon; Lee, Young Kyung; Kang, Suk-Ho
2008-03-01
To determine the optimal binning method and ROI size for an automatic classification system differentiating diffuse infiltrative lung diseases on the basis of textural analysis at HRCT, six hundred circular regions of interest (ROIs) with 10-, 20-, and 30-pixel diameters, comprising 100 ROIs for each of six regional disease patterns (normal, NL; ground-glass opacity, GGO; reticular opacity, RO; honeycombing, HC; emphysema, EMPH; and consolidation, CONS), were marked by an experienced radiologist on HRCT images. Histogram (mean) and co-occurrence matrix (mean and SD of angular second moment, contrast, correlation, entropy, and inverse difference momentum) features were employed to test binning and ROI effects. To find optimal binning, variable-bin-size linear binning (LB; bin size Q: 4-30, 32, 64, 128, 144, 196, 256, 384) and non-linear binning (NLB; Q: 4-30) methods (K-means and Fuzzy C-means clustering) were tested. For automated classification, an SVM classifier was implemented. To assess cross-validation of the system, a fivefold method was used, and each test was repeated twenty times. Overall accuracies for every combination of ROI and bin size were statistically compared. For small bin sizes (Q <= 10), NLB showed significantly better accuracy than LB, and K-means NLB (Q = 26) was statistically significantly better than every LB. For the 30x30 ROI size and most bin sizes, the K-means method outperformed the other NLB and LB methods. With optimal binning and the other parameters set, the overall sensitivity of the classifier was 92.85%.
The sensitivity and specificity of the system for each class were as follows: NL, 95%, 97.9%; GGO, 80%, 98.9%; RO 85%, 96.9%; HC, 94.7%, 97%; EMPH, 100%, 100%; and CONS, 100%, 100%, respectively. We determined the optimal binning method and ROI size of the automatic classification system for differentiation between diffuse infiltrative lung diseases on the basis of texture features at HRCT.
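The contrast between linear (equal-width) binning and non-linear K-means binning can be sketched in one dimension. This is a generic illustration of the two binning families, not the study's implementation.

```python
import random

def linear_bins(values, q):
    """Equal-width (linear) binning of intensities into q bins."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / q or 1.0          # guard against a constant signal
    return [min(int((v - lo) / width), q - 1) for v in values]

def kmeans_bins(values, q, iters=20, seed=0):
    """Non-linear binning: 1-D k-means places q bin centers adaptively,
    so bin boundaries follow the data distribution."""
    rng = random.Random(seed)
    centers = sorted(rng.sample(values, q))
    for _ in range(iters):
        groups = [[] for _ in range(q)]
        for v in values:
            groups[min(range(q), key=lambda i: abs(v - centers[i]))].append(v)
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return [min(range(q), key=lambda i: abs(v - centers[i])) for v in values]
```

For small q, adaptive centers waste fewer bins on empty intensity ranges, which is one plausible reason NLB outperformed LB at Q <= 10.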
Optimal Halbach Permanent Magnet Designs for Maximally Pulling and Pushing Nanoparticles
Sarwar, A.; Nemirovski, A.; Shapiro, B.
2011-01-01
Optimization methods are presented to design Halbach arrays that maximize the forces applied on magnetic nanoparticles at deep tissue locations. In magnetic drug targeting, where magnets are used to focus therapeutic nanoparticles to disease locations, the sharp fall-off of magnetic fields and forces with distance from the magnets has limited the depth of targeting. Creating stronger forces at depth with optimally designed Halbach arrays would allow treatment of a wider class of patients, e.g. patients with deeper tumors. The presented optimization methods, based on semi-definite quadratic programming, yield provably globally optimal Halbach designs in two and three dimensions for maximal pull or push magnetic forces (stronger pull forces can collect nanoparticles against blood forces in deeper vessels; push forces can be used to inject particles into precise locations, e.g. into the inner ear). These Halbach designs, here tested in simulations of Maxwell's equations, significantly outperform benchmark magnets of the same size and strength. For example, a 3-dimensional 36-element 2000 cm3 volume optimal Halbach design yields a 5× greater force at a 10 cm depth compared to a uniformly magnetized magnet of the same size and strength. The designed arrays should be feasible to construct, as they have a similar strength (≤ 1 Tesla), size (≤ 2000 cm3), and number of elements (≤ 36) as previously demonstrated arrays, and retain good performance for reasonable manufacturing errors (element magnetization direction errors ≤ 5°), thus yielding practical designs to improve magnetic drug targeting treatment depths. PMID:23335834
Ding, Yongxia; Zhang, Peili
2018-06-12
Problem-based learning (PBL) is an effective and highly efficient teaching approach that is extensively applied in education systems across a variety of countries. This study aimed to investigate the effectiveness of web-based PBL teaching pedagogies in large classes. The cluster sampling method was used to separate two college-level nursing student classes (graduating class of 2013) into two groups. The experimental group (n = 162) was taught using a web-based PBL teaching approach, while the control group (n = 166) was taught using conventional teaching methods. We then assessed the experimental group's satisfaction with the web-based PBL teaching mode, after comparing the two groups' outcomes on examinations and self-learning capacity. Examination scores and self-learning capabilities were significantly higher in the experimental group than in the control group (P < 0.01). In addition, 92.6% of students in the experimental group expressed satisfaction with the new web-based PBL teaching approach. In a large class-size teaching environment, the web-based PBL teaching approach appears to be more effective than traditional teaching methods. These results demonstrate the effectiveness of web-based teaching technologies in problem-based learning.
Shah, Nirmal; Seth, Avinashkumar; Balaraman, R; Sailor, Girish; Javia, Ankur; Gohil, Dipti
2018-04-01
The objective of this work was to utilize the potential of microemulsion for the improvement in oral bioavailability of raloxifene hydrochloride, a BCS class-II drug with 2% bioavailability. Drug-loaded microemulsion was prepared by the water titration method using Capmul MCM C8, Tween 20, and Polyethylene glycol 400 as oil, surfactant, and co-surfactant, respectively. The pseudo-ternary phase diagram was constructed between oil and the surfactant mixture to obtain appropriate components and their concentration ranges that result in a large existence area of microemulsion. D-optimal mixture design was utilized as a statistical tool for optimization of the microemulsion, considering oil, Smix, and water as independent variables with percentage transmittance and globule size as dependent variables. The optimized formulation showed 100 ± 0.1% transmittance and 17.85 ± 2.78 nm globule size, in close agreement with the values of the dependent variables predicted by the design expert software. The optimized microemulsion showed pronounced enhancement in release rate compared to plain drug suspension, following a diffusion-controlled release mechanism by the Higuchi model. The formulation showed a zeta potential of -5.88 ± 1.14 mV, which imparts good stability to the drug-loaded microemulsion dispersion. Surface morphology study with a transmission electron microscope showed discrete spherical nano-sized globules with smooth surfaces. An in-vivo pharmacokinetic study of the optimized microemulsion formulation in Wistar rats showed a 4.29-fold enhancement in bioavailability. A stability study showed adequate results for various parameters checked up to six months. These results reveal the potential of microemulsion for significant improvement in the oral bioavailability of poorly soluble raloxifene hydrochloride.
Complete exchange on the iPSC-860
NASA Technical Reports Server (NTRS)
Bokhari, Shahid H.
1991-01-01
The implementation of complete exchange on the circuit-switched Intel iPSC-860 hypercube is described. This pattern, also known as all-to-all personalized communication, is the densest requirement that can be imposed on a network. On the iPSC-860, care needs to be taken to avoid edge contention, which can have a disastrous impact on communication time. There are basically two classes of algorithms that achieve contention-free complete exchange. The first contains the classical standard exchange algorithm, which is generally useful for small message sizes. The second includes a number of optimal or near-optimal algorithms that are best for large messages. Measurements of communication overhead on the iPSC-860 are given, and a notation for analyzing communication link usage is developed. It is shown that for the two classes of algorithms there is substantial variation in performance with synchronization technique and choice of message protocol. Timings of six implementations are given; each of these is useful over a particular range of message size and cube dimension. Since the complete exchange is a superset of communication patterns, these timings represent upper bounds on the time required by an arbitrary communication requirement. These results indicate that the programmer needs to evaluate several possibilities before finalizing an implementation - a careful choice can lead to very significant savings in time.
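The classical standard-exchange algorithm mentioned above can be sketched as dimension-order routing: in a d-dimensional hypercube, each node exchanges its pending items with its neighbor along each dimension in turn. This simulation is illustrative only and ignores the contention and protocol effects the paper actually measures.

```python
def standard_exchange(d, messages):
    """Simulate the standard-exchange complete exchange on a d-dimensional
    hypercube: d pairwise exchange steps, one per dimension.
    messages[src][dst] is the payload src wants delivered to dst."""
    p = 1 << d
    # inbox[node] holds (final_dst, payload) items currently resident at node
    inbox = [[(dst, messages[src][dst]) for dst in range(p)] for src in range(p)]
    for i in range(d):
        bit = 1 << i
        new = [[] for _ in range(p)]
        for node in range(p):
            partner = node ^ bit
            for dst, payload in inbox[node]:
                # forward across dimension i iff dst differs from node in bit i
                target = partner if (dst ^ node) & bit else node
                new[target].append((dst, payload))
        inbox = new
    return inbox

# Demo: 8-node hypercube, payload (src, dst) for traceability
msgs = [[(src, dst) for dst in range(8)] for src in range(8)]
delivered = standard_exchange(3, msgs)
```

After d steps every item's address agrees with its destination in all d bits, so each node ends up holding exactly one item from every source.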
In vivo RF powering for advanced biological research.
Zimmerman, Mark D; Chaimanonart, Nattapon; Young, Darrin J
2006-01-01
An optimized remote powering architecture with a miniature and implantable RF power converter for an untethered small laboratory animal inside a cage is proposed. The proposed implantable device exhibits dimensions of less than 6 mm × 6 mm × 1 mm and a mass of 100 mg including a medical-grade silicone coating. The external system consists of a Class-E power amplifier driving a tuned 15 cm × 25 cm external coil placed underneath the cage. The implant device is located in the animal's abdomen in a plane parallel to the external coil and utilizes inductive coupling to receive power from the external system. A half-wave rectifier rectifies the received AC voltage and passes the resulting DC current to a 2.5 kΩ resistor, which represents the loading of an implantable microsystem. An optimal operating point with respect to operating frequency and number of turns in each coil inductor was determined by analyzing the system efficiency. The determined optimal operating condition is based on a 4-turn external coil and a 20-turn internal coil operating at 4 MHz. With the Class-E amplifier consuming a constant power of 25 W, this operating condition is sufficient to supply a desired 3.2 V with 1.3 mA to the load over a cage size of 10 cm × 20 cm with an animal tilting angle of up to 60 degrees, which is the worst case considered for the prototype design. A voltage regulator can be designed to regulate the received DC power into a stable supply for the bio-implant microsystem.
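The end-to-end power budget follows directly from the numbers reported above (3.2 V at 1.3 mA delivered, 25 W amplifier input); the small helper below simply makes that arithmetic explicit.

```python
def link_efficiency(v_load, i_load, p_amp):
    """DC power delivered to the load and end-to-end efficiency of the
    inductive powering link, from load voltage/current and amplifier power."""
    p_load = v_load * i_load        # P = V * I, in watts
    return p_load, p_load / p_amp

# Values reported in the abstract: 3.2 V, 1.3 mA load; 25 W Class-E amplifier
p_load, eta = link_efficiency(3.2, 1.3e-3, 25.0)
```

The delivered power is about 4.16 mW, a tiny fraction of the 25 W input, which is typical for loosely coupled cage-scale inductive links where the animal's position and tilt are unconstrained.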
Class-Size Effects in Secondary School
ERIC Educational Resources Information Center
Krassel, Karl Fritjof; Heinesen, Eskil
2014-01-01
We analyze class-size effects on academic achievement in secondary school in Denmark exploiting an institutional setting where pupils cannot predict class size prior to enrollment, and where post-enrollment responses aimed at affecting realized class size are unlikely. We identify class-size effects combining a regression discontinuity design with…
Anderson, RaeAnn E; Hruska, Bryce; Boros, Alec P; Richardson, Christopher J; Delahanty, Douglas L
2018-03-01
Poly-substance use and psychiatric comorbidity are common among individuals receiving substance detoxification services. Posttraumatic stress disorder (PTSD) and major depressive disorder (MDD) are the most common co-occurring psychiatric disorders with substance use disorder (SUD). Current treatment favors a one-size-fits-all approach to treating addiction focusing on one substance or one comorbidity. Research examining patterns of substance use and comorbidities can inform efforts to effectively identify and differentially treat individuals with co-occurring conditions. Using latent class analysis, the current study identified four patterns of PTSD, MDD, and substance use among 375 addiction treatment seekers receiving medically supervised detoxification. The four identified classes were: 1) a PTSD-MDD-Poly SUD class characterized by PTSD and MDD occurring in the context of opioid, cannabis, and tobacco use disorders; 2) an MDD-Poly SUD class characterized by MDD and alcohol, opioid, tobacco, and cannabis use disorders; 3) an alcohol-tobacco class characterized by alcohol and tobacco use disorders; and 4) an opioid-tobacco use disorder class characterized by opioid and tobacco use disorders. The observed classes differed on gender and clinical characteristics including addiction severity, trauma history, and PTSD/MDD symptom severity. The observed classes likely require differing treatment approaches. For example, people in the PTSD-MDD-Poly SUD class would likely benefit from treatment approaches targeting anxiety sensitivity and distress tolerance, while the opioid-tobacco class would benefit from treatments that incorporate motivational interviewing. Appropriate matching of treatment to class could optimize treatment outcomes for polysubstance and comorbid psychiatric treatment seekers. 
These findings also underscore the importance of well-developed referral networks to optimize outpatient psychotherapy for detoxification treatment-seekers to enhance long-term recovery, particularly those that include transdiagnostic treatment components.
A high-efficiency low-voltage class-E PA for IoT applications in sub-1 GHz frequency range
NASA Astrophysics Data System (ADS)
Zhou, Chenyi; Lu, Zhenghao; Gu, Jiangmin; Yu, Xiaopeng
2017-10-01
We propose a complete, iterative integrated-circuit and electro-magnetic (EM) co-design methodology and procedure for a low-voltage sub-1 GHz class-E PA. The presented class-E PA consists of the on-chip power transistor, the on-chip gate driving circuits, the off-chip tunable LC load network, and the off-chip LC ladder low-pass filter. The design methodology includes explicit design-equation-based analysis and numerical derivation of circuit component values, output-power-targeted transistor sizing and low-pass filter design, and power-efficiency-oriented design optimization. The proposed design procedure includes power-efficiency-oriented LC network tuning and a detailed circuit/EM co-simulation plan at the integrated-circuit, package, and PCB levels to ensure an accurate simulation-to-measurement match and first-pass design success. The proposed PA targets more than 15 dBm output power delivery and 40% power efficiency in the 433 MHz frequency band with a 1.5 V low-voltage supply. The LC load network is designed to be off-chip for easy tuning and optimization. The same circuit can be extended to all sub-1 GHz applications with the same tuning and optimization of the load network at different frequencies. The amplifier is implemented in 0.13 μm CMOS technology with a core area of 400 μm by 300 μm. Measurement results show that it provides power delivery of 16.42 dBm at the antenna with an efficiency of 40.6%. A harmonics suppression of 44 dBc is achieved, making it suitable for massive deployment of IoT devices. Project supported by the National Natural Science Foundation of China (No. 61574125) and the Industry Innovation Project of Suzhou City of China (No. SYG201641).
Kupinski, M. K.; Clarkson, E.
2015-01-01
We present a new method for computing optimized channels for channelized quadratic observers (CQO) that is feasible for high-dimensional image data. The method for calculating channels is applicable in general and optimal for Gaussian distributed image data. Gradient-based algorithms for determining the channels are presented for five different information-based figures of merit (FOMs). Analytic solutions for the optimum channels for each of the five FOMs are derived for the case of equal mean data for both classes. The optimum channels for three of the FOMs under the equal mean condition are shown to be the same. This result is critical since some of the FOMs are much easier to compute. Implementing the CQO requires a set of channels and the first- and second-order statistics of channelized image data from both classes. The dimensionality reduction from M measurements to L channels is a critical advantage of CQO since estimating image statistics from channelized data requires smaller sample sizes and inverting a smaller covariance matrix is easier. In a simulation study we compare the performance of ideal and Hotelling observers to CQO. The optimal CQO channels are calculated using both eigenanalysis and a new gradient-based algorithm for maximizing Jeffrey's divergence (J). Optimal channel selection without eigenanalysis makes the J-CQO on large-dimensional image data feasible. PMID:26366764
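The dimensionality-reduction step central to CQO (M measurements compressed to L channels, then observer statistics estimated in channel space) can be sketched with the simpler linear (Hotelling) analogue. The channel matrix and data below are hypothetical random stand-ins; the paper's observer is quadratic and its channels are optimized, not random.

```python
import numpy as np

def channelize_and_hotelling(T, g1, g2):
    """Reduce M-dimensional image data to L channels (v = T g) and compute a
    channelized linear (Hotelling) template from sample statistics.
    Sketch of the dimensionality-reduction idea only, not the paper's CQO."""
    v1, v2 = g1 @ T.T, g2 @ T.T               # channelized samples, shape (n, L)
    mu1, mu2 = v1.mean(0), v2.mean(0)
    S = 0.5 * (np.cov(v1.T) + np.cov(v2.T))   # pooled L x L channel covariance
    return np.linalg.solve(S, mu2 - mu1)      # template in channel space

rng = np.random.default_rng(0)
M, L, n = 64, 4, 500
T = rng.standard_normal((L, M)) / np.sqrt(M)  # hypothetical random channels
g1 = rng.standard_normal((n, M))              # class 1 samples
g2 = rng.standard_normal((n, M)) + 0.5        # class 2: mean-shifted samples
w = channelize_and_hotelling(T, g1, g2)
```

Note the statistical advantage claimed in the abstract: only an L × L covariance (here 4 × 4) is estimated and inverted, rather than the full M × M (64 × 64) one.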
NASA Astrophysics Data System (ADS)
Utschick, C.; Skoulatos, M.; Schneidewind, A.; Böni, P.
2016-11-01
The cold-neutron triple-axis spectrometer PANDA at the neutron source FRM II has been serving an international user community studying condensed matter physics problems. We report on a new setup, improving the signal-to-noise ratio for small samples and pressure cell setups. Analytical and numerical Monte Carlo methods are used for the optimization of elliptic and parabolic focusing guides. They are placed between the monochromator and sample positions, and the flux at the sample is compared to the one achieved by standard monochromator focusing techniques. A 25 times smaller spot size is achieved, associated with a factor of 2 increase in intensity, within the same divergence limits, ±2°. This optional neutron focusing guide shall establish a top-class spectrometer for studying novel exotic properties of matter in combination with more stringent sample environment conditions such as extreme pressures associated with small sample sizes.
Phase-space interference in extensive and nonextensive quantum heat engines
NASA Astrophysics Data System (ADS)
Hardal, Ali Ü. C.; Paternostro, Mauro; Müstecaplıoǧlu, Özgür E.
2018-04-01
Quantum interference is at the heart of what sets the quantum and classical worlds apart. We demonstrate that quantum interference effects involving a many-body working medium are responsible for genuinely nonclassical features in the performance of a quantum heat engine. The features with which quantum interference manifests itself in the work output of the engine depend strongly on the extensive nature of the working medium. While identifying the class of work substances that optimize the performance of the engine, our results shed light on the optimal size of such media of quantum workers to maximize the work output and efficiency of quantum energy machines.
Quantum money with nearly optimal error tolerance
NASA Astrophysics Data System (ADS)
Amiri, Ryan; Arrazola, Juan Miguel
2017-06-01
We present a family of quantum money schemes with classical verification which display a number of benefits over previous proposals. Our schemes are based on hidden matching quantum retrieval games and they tolerate noise up to 23%, which we conjecture reaches 25% asymptotically as the dimension of the underlying hidden matching states is increased. Furthermore, we prove that 25% is the maximum tolerable noise for a wide class of quantum money schemes with classical verification, meaning our schemes are almost optimally noise tolerant. We use methods in semidefinite programming to prove security in a substantially different manner to previous proposals, leading to two main advantages: first, coin verification involves only a constant number of states (with respect to coin size), thereby allowing for smaller coins; second, the reusability of coins within our scheme grows linearly with the size of the coin, which is known to be optimal. Last, we suggest methods by which the coins in our protocol could be implemented using weak coherent states and verified using existing experimental techniques, even in the presence of detector inefficiencies.
A test of reproductive power in snakes.
Boback, Scott M; Guyer, Craig
2008-05-01
Reproductive power is a contentious concept among ecologists, and the model has been criticized on theoretical and empirical grounds. Despite these criticisms, the model has successfully predicted the modal (optimal) size in three large taxonomic groups and the shape of the body size distribution in two of these groups. We tested the reproductive power model on snakes, a group that differs markedly in physiology, foraging ecology, and body shape from the endothermic groups upon which the model was derived. Using detailed field data from the published literature, snake-specific constants associated with reproductive power were determined using allometric relationships of energy invested annually in egg production and population productivity. The resultant model accurately predicted the mode and left side of the size distribution for snakes but failed to predict the right side of that distribution. If the model correctly describes what is possible in snakes, observed size diversity is limited, especially in the largest size classes.
ERIC Educational Resources Information Center
Bonesronning, Hans
2004-01-01
The present paper supplements the traditional class size literature by exploring the causal relationship between class size and parental effort in education production. Class size variation that is exogenous to parental effort comes from interaction between enrollment and a maximum class size rule of 30 students in the lower secondary school in…
Comparisons of neural networks to standard techniques for image classification and correlation
NASA Technical Reports Server (NTRS)
Paola, Justin D.; Schowengerdt, Robert A.
1994-01-01
Neural network techniques for multispectral image classification and spatial pattern detection are compared to the standard techniques of maximum-likelihood classification and spatial correlation. The neural network produced a more accurate classification than maximum-likelihood for a Landsat scene of Tucson, Arizona. Some of the errors in the maximum-likelihood classification are illustrated using decision region and class probability density plots. As expected, the main drawback of the neural network method is the long time required for the training stage. The network was trained with several different hidden layer sizes to optimize both classification accuracy and training speed, and one node per class was found to be optimal. Performance improved when 3x3 local windows of image data were presented to the network. This modification introduces texture into the classification without explicit calculation of a texture measure. Larger windows were successfully used for the detection of spatial features in Landsat and Magellan synthetic aperture radar imagery.
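The texture-by-windowing idea can be sketched as follows: each pixel's 3x3 neighborhood is flattened into a feature vector, so local spatial structure enters the classifier without an explicit texture measure. This is our illustration, not the paper's code; the function name and toy image are hypothetical.

```python
def window_features(image, size=3):
    """Flatten each interior pixel's size x size neighborhood into a
    feature vector, letting local texture enter the classifier implicitly."""
    h = size // 2
    rows, cols = len(image), len(image[0])
    feats = []
    for r in range(h, rows - h):
        for c in range(h, cols - h):
            feats.append([image[r + dr][c + dc]
                          for dr in range(-h, h + 1)
                          for dc in range(-h, h + 1)])
    return feats

# toy 4x4 single-band image: four interior pixels, nine values each
img = [[0, 1, 2, 3],
       [4, 5, 6, 7],
       [8, 9, 10, 11],
       [12, 13, 14, 15]]
feats = window_features(img)
```

Each 9-element vector would then be fed to the network in place of the single pixel value.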
NASA Astrophysics Data System (ADS)
Ma, Lei; Cheng, Liang; Li, Manchun; Liu, Yongxue; Ma, Xiaoxue
2015-04-01
Unmanned Aerial Vehicles (UAVs) have been used increasingly for natural resource applications in recent years due to their greater availability and the miniaturization of sensors. In addition, Geographic Object-Based Image Analysis (GEOBIA) has received more attention as a novel paradigm for remote sensing earth observation data. However, GEOBIA introduces some new problems compared with pixel-based methods. In this study, we developed a strategy for the semi-automatic optimization of object-based classification, which involves an area-based accuracy assessment that analyzes the relationship between scale and training set size. We found that the Overall Accuracy (OA) increased as the training set ratio (the proportion of segmented objects used for training) increased when the Segmentation Scale Parameter (SSP) was fixed. The OA increased more slowly as the training set ratio became larger, and a similar pattern was observed for pixel-based image analysis. The OA decreased as the SSP increased when the training set ratio was fixed. Consequently, the SSP should not be too large during classification using a small training set ratio. By contrast, a large training set ratio is required if classification is performed using a high SSP. In addition, we suggest that the optimal SSP for each class has a high positive correlation with the mean area obtained by manual interpretation, which can be summarized by a linear correlation equation. We expect that these results will be applicable to UAV imagery classification to determine the optimal SSP for each class.
Class Size and Education in England Evidence Report. Research Report. DFE-RR169
ERIC Educational Resources Information Center
Department for Education, 2011
2011-01-01
This report gives an overview of the existing evidence base on class size and education in England. In particular, it considers how class sizes have changed over time; the impact of the increase in birth rate on pupil numbers and how this could affect the teacher requirement and class sizes; and the impact of class size on educational outcomes.…
Caccamo, M; Ferguson, J D; Veerkamp, R F; Schadt, I; Petriglieri, R; Azzaro, G; Pozzebon, A; Licitra, G
2014-01-01
As part of a larger project aiming to develop management evaluation tools based on results from test-day (TD) models, the objective of this study was to examine the effect of physical composition of total mixed rations (TMR) tested quarterly from March 2006 through December 2008 on milk, fat, and protein yield curves for 25 herds in Ragusa, Sicily. A random regression sire-maternal grandsire model was used to estimate variance components for milk, fat, and protein yields fitted on a full data set, including 241,153 TD records from 9,809 animals in 42 herds recorded from 1995 through 2008. The model included parity, age at calving, year at calving, and stage of pregnancy as fixed effects. Random effects were herd × test date, sire and maternal grandsire additive genetic effect, and permanent environmental effect modeled using third-order Legendre polynomials. Model fitting was carried out using ASREML. Afterward, for the 25 herds involved in the study, 9 particle size classes were defined based on the proportions of TMR particles on the top (19-mm) and middle (8-mm) screen of the Penn State Particle Separator. Subsequently, the model with estimated variance components was used to examine the influence of TMR particle size class on milk, fat, and protein yield curves. An interaction was included with the particle size class and days in milk. The effect of the TMR particle size class was modeled using a ninth-order Legendre polynomial. Lactation curves were predicted from the model while controlling for TMR chemical composition (crude protein content of 15.5%, neutral detergent fiber of 40.7%, and starch of 19.7% for all classes), to have pure estimates of particle distribution not confounded by nutrient content of TMR. We found little effect of class of particle proportions on milk yield and fat yield curves. Protein yield was greater for sieve classes with 10.4 to 17.4% of TMR particles retained on the top (19-mm) sieve. 
Optimal distributions different from those recommended may reflect regional differences based on climate and types and quality of forages fed. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
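The third-order Legendre polynomial basis used in random regression models like the one above can be sketched as follows. The rescaling of days in milk to [-1, 1] is conventional; the lactation interval endpoints (5 and 305 d) are our assumption for illustration, not values from this study.

```python
def legendre_basis(dim, dim_min=5.0, dim_max=305.0):
    """Evaluate Legendre polynomials P0..P3 at days in milk (dim),
    after rescaling dim to x in [-1, 1]."""
    x = -1.0 + 2.0 * (dim - dim_min) / (dim_max - dim_min)
    return [1.0,                              # P0
            x,                                # P1
            0.5 * (3.0 * x ** 2 - 1.0),       # P2
            0.5 * (5.0 * x ** 3 - 3.0 * x)]   # P3
```

These are the unnormalized polynomials; random regression analyses often use a normalized variant, which differs only by constant scale factors.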
Lowe, Michael R.; Sehlinger, Troy; Soniat, Thomas M.; LaPeyre, Megan K.
2017-01-01
Despite nearly a century of exploitation and scientific study, predicting growth and mortality rates of the eastern oyster (Crassostrea virginica) as a means to inform local harvest and management activities remains difficult. Ensuring that models reflect local population responses to varying salinity and temperature combinations requires locally appropriate models. Using long-term (1988 to 2015) monitoring data from Louisiana's public oyster reefs, we develop regionally specific models of temperature- and salinity-driven mortality (sack oysters only) and growth for spat (<25 mm), seed (25–75 mm), and sack (>75 mm) oyster size classes. The results demonstrate that the optimal combination of temperature and salinity where Louisiana oysters experience reduced mortality and fast growth rates is skewed toward lower salinities and higher water temperatures than previous models have suggested. Outside of that optimal range, oysters are commonly exposed to combinations of temperature and salinity that are correlated with high mortality and reduced growth. How these combinations affect growth, and to a lesser degree mortality, appears to be size class dependent. Given current climate predictions for the region and ongoing large-scale restoration activities in coastal Louisiana, the growth and mortality models are a critical step toward ensuring sustainable oyster reefs for long-term harvest and continued delivery of the ecological services in a changing environment.
NASA Technical Reports Server (NTRS)
Burrows, R. R.
1972-01-01
A particular type of three-impulse transfer between two circular orbits is analyzed. The possibility of three plane changes is recognized, and the problem is to distribute these plane changes optimally so as to minimize the sum of the individual impulses. Numerical difficulties and their solution are discussed. Numerical results obtained from a conjugate gradient technique are presented both for the case where the individual plane changes are unconstrained and for the case where they are constrained. Perhaps not unexpectedly, multiple minima are found. The techniques presented could be extended to the finite burn case, but the contents are addressed primarily to preliminary mission design and vehicle sizing.
ERIC Educational Resources Information Center
Bettinger, Eric P.; Long, Bridget Terry
2018-01-01
This paper measures the effects of collegiate class size on college retention and graduation. Class size is a perennial issue in research on primary and secondary schooling. Few researchers have focused on the causal impacts of collegiate class size, however. Whereas college students have greater choice of classes, selection problems and nonrandom…
ERIC Educational Resources Information Center
Laine, Sabrina W. M., Ed.; Ward, James G., Ed.
This book contains a collection of essays involving new research on class-size reduction. Six chapters include: (1) "Reducing Class Size in Public Schools: Cost-Benefit Issues and Implications" (John F. Witte); (2) "Making Policy Choices: Is Class-Size Reduction the Best Alternative?" (Doug Harris and David N. Plank); (3) "Smaller Classes, Lower…
ERIC Educational Resources Information Center
Cho, Hyunkuk; Glewwe, Paul; Whitler, Melissa
2012-01-01
Many U.S. states and cities spend substantial funds to reduce class size, especially in elementary (primary) school. Estimating the impact of class size on learning is complicated, since children in small and large classes differ in many observed and unobserved ways. This paper uses a method of Hoxby (2000) to assess the impact of class size on…
ERIC Educational Resources Information Center
Biddle, Bruce J.; Berliner, David C.
Interest in class size is widespread today. Debates often take place about "ideal" class size. Controversial efforts to reduce class size have appeared at both the federal level and in various states around the nation. This paper reviews research on class size and discusses findings, how these findings can be explained, and policy implications.…
Do Class Size Effects Differ across Grades?
ERIC Educational Resources Information Center
Nandrup, Anne Brink
2016-01-01
This paper contributes to the class size literature by analysing whether short-run class size effects are constant across grade levels in compulsory school. Results are based on administrative data on all pupils enrolled in Danish public schools. Identification is based on a government-imposed class size cap that creates exogenous variation in…
The Synergy of Class Size Reduction and Classroom Quality
ERIC Educational Resources Information Center
Graue, Elizabeth; Rauscher, Erica; Sherfinski, Melissa
2009-01-01
A contextual approach to understanding class size reduction includes attention to both educational inputs and processes. Based on our study of a class size reduction program in Wisconsin we explore the following question: How do class size reduction and classroom quality interact to produce learning opportunities in early elementary classrooms? To…
Online Class Size, Note Reading, Note Writing and Collaborative Discourse
ERIC Educational Resources Information Center
Qiu, Mingzhu; Hewitt, Jim; Brett, Clare
2012-01-01
Researchers have long recognized class size as affecting students' performance in face-to-face contexts. However, few studies have examined the effects of class size on exact reading and writing loads in online graduate-level courses. This mixed-methods study examined relationships among class size, note reading, note writing, and collaborative…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Qingda; Gao, Xiaoyang; Krishnamoorthy, Sriram
Empirical optimizers like ATLAS have been very effective in optimizing computational kernels in libraries. The best choice of parameters such as tile size and degree of loop unrolling is determined by executing different versions of the computation. In contrast, optimizing compilers use a model-driven approach to program transformation. While the model-driven approach of optimizing compilers is generally orders of magnitude faster than ATLAS-like library generators, its effectiveness can be limited by the accuracy of the performance models used. In this paper, we describe an approach where a class of computations is modeled in terms of constituent operations that are empirically measured, thereby allowing modeling of the overall execution time. The performance model with empirically determined cost components is used to perform data layout optimization together with the selection of library calls and layout transformations in the context of the Tensor Contraction Engine, a compiler for a high-level domain-specific language for expressing computational models in quantum chemistry. The effectiveness of the approach is demonstrated through experimental measurements on representative computations from quantum chemistry.
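The empirical, ATLAS-style search that this model-driven approach is contrasted with can be illustrated by timing candidate tile sizes directly and keeping the fastest; the kernel, matrix size, and candidate set below are arbitrary choices of ours, not anything from the paper.

```python
import time

def blocked_matmul(A, B, n, tile):
    """Naive tiled n x n matrix multiply; the tile size is the tunable."""
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, tile):
        for kk in range(0, n, tile):
            for jj in range(0, n, tile):
                for i in range(ii, min(ii + tile, n)):
                    for k in range(kk, min(kk + tile, n)):
                        a = A[i][k]
                        for j in range(jj, min(jj + tile, n)):
                            C[i][j] += a * B[k][j]
    return C

def best_tile(n=64, candidates=(4, 8, 16, 32)):
    """Empirically pick the fastest tile size by running each version."""
    A = [[1.0] * n for _ in range(n)]
    B = [[1.0] * n for _ in range(n)]
    timings = {}
    for t in candidates:
        start = time.perf_counter()
        blocked_matmul(A, B, n, t)
        timings[t] = time.perf_counter() - start
    return min(timings, key=timings.get)
```

A model-driven compiler would instead predict the winner from a cost model; the paper's approach measures the constituent operation costs empirically and composes them into such a model.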
Kinetic turbulence simulations at extreme scale on leadership-class systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Bei; Ethier, Stephane; Tang, William
2013-01-01
Reliable predictive simulation capability addressing confinement properties in magnetically confined fusion plasmas is critically important for ITER, a 20 billion dollar international burning plasma device under construction in France. The complex study of kinetic turbulence, which can severely limit the energy confinement and impact the economic viability of fusion systems, requires simulations at extreme scale for such an unprecedented device size. Our newly optimized, global, ab initio particle-in-cell code solving the nonlinear equations underlying gyrokinetic theory achieves excellent performance with respect to "time to solution" at the full capacity of the IBM Blue Gene/Q on 786,432 cores of Mira at ALCF and, recently, on the 1,572,864 cores of Sequoia at LLNL. Recent multithreading and domain decomposition optimizations in the new GTC-P code represent critically important software advances for modern, low-memory-per-core systems by enabling routine simulations at unprecedented size (130 million grid points, ITER scale) and resolution (65 billion particles).
Improving oral bioavailability of acyclovir using nanoparticulates of thiolated xyloglucan.
Madgulkar, Ashwini; Bhalekar, Mangesh R; Dikpati, Amrita A
2016-08-01
Acyclovir, a BCS class III drug, exhibits poor bioavailability due to limited permeability. The aim of this work was to formulate and characterize thiolated xyloglucan polysaccharide nanoparticles (TH-NPs) of acyclovir in order to increase its oral bioavailability. Acyclovir-loaded TH-NPs were prepared using a cross-linking agent. Interactions of formulation excipients were investigated using Fourier transform infrared spectroscopy (FT-IR). The nanoparticles were lyophilized with a cryoprotectant, characterized for particle size, morphology and stability, and optimized using a Box-Behnken design. The optimized TH-NP formulation exhibited a particle size of 474.4±2.01 and an entrapment efficiency of 81.57%. A marked enhancement in mucoadhesion was also observed. An in-vivo study in a rat model showed that the relative bioavailability of acyclovir TH-NPs is ~2.575-fold greater than that of the marketed acyclovir suspension. Copyright © 2016 Elsevier B.V. All rights reserved.
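A Box-Behnken design for three factors, of the kind that might underlie the optimization reported here, places each pair of factors at their ±1 levels with the remaining factor at its centre, plus centre runs. A minimal sketch in coded levels only (the abstract does not specify the factors, so none are named):

```python
from itertools import combinations, product

def box_behnken(n_factors=3, n_center=1):
    """Generate the coded (-1, 0, +1) runs of a Box-Behnken design."""
    runs = []
    for pair in combinations(range(n_factors), 2):
        for levels in product((-1, 1), repeat=2):
            run = [0] * n_factors
            run[pair[0]], run[pair[1]] = levels
            runs.append(run)
    # centre points, typically replicated to estimate pure error
    runs.extend([[0] * n_factors for _ in range(n_center)])
    return runs

design = box_behnken()
```

For three factors this gives 3 pairs x 4 level combinations = 12 edge runs plus the centre runs, avoiding the extreme corners of the factor space.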
Satellite Telemetry and Long-Range Bat Movements
Smith, Craig S.; Epstein, Jonathan H.; Breed, Andrew C.; Plowright, Raina K.; Olival, Kevin J.; de Jong, Carol; Daszak, Peter; Field, Hume E.
2011-01-01
Background Understanding the long-distance movement of bats has direct relevance to studies of population dynamics, ecology, disease emergence, and conservation. Methodology/Principal Findings We developed and trialed several collar and platform terminal transmitter (PTT) combinations on both free-living and captive fruit bats (Family Pteropodidae: Genus Pteropus). We examined transmitter weight, size, profile and comfort as key determinants of maximized transmitter activity. We then tested the importance of bat-related variables (species size/weight, roosting habitat and behavior) and environmental variables (day-length, rainfall pattern) in determining optimal collar/PTT configuration. We compared battery- and solar-powered PTT performance in various field situations, and found the latter more successful in maintaining voltage on species that roosted higher in the tree canopy, and at lower density, than those that roost more densely and lower in trees. Finally, we trialed transmitter accuracy, and found that actual distance errors and Argos location class error estimates were in broad agreement. Conclusions/Significance We conclude that no single collar or transmitter design is optimal for all bat species, and that species size/weight, species ecology and study objectives are key design considerations. Our study provides a strategy for collar and platform choice that will be applicable to a larger number of bat species as transmitter size and weight continue to decrease in the future. PMID:21358823
Classification of the Gabon SAR Mosaic Using a Wavelet Based Rule Classifier
NASA Technical Reports Server (NTRS)
Simard, Marc; Saatchi, Sasan; DeGrandi, Gianfranco
2000-01-01
A method is developed for semi-automated classification of SAR images of the tropical forest. Information is extracted using the wavelet transform (WT), which captures structural information in the image as a function of scale. To classify the SAR image, a decision tree classifier is used, with pruning applied to trade classification rate against tree size. The results give explicit insight into the type of information useful for a given class.
Latent classes of resilience and psychological response among only-child loss parents in China.
Wang, An-Ni; Zhang, Wen; Zhang, Jing-Ping; Huang, Fei-Fei; Ye, Man; Yao, Shu-Yu; Luo, Yuan-Hui; Li, Zhi-Hua; Zhang, Jie; Su, Pan
2017-10-01
Only-child loss parents in China have recently gained extensive attention as a newly defined social group. Resilience may offer a way out of their psychological dilemma. Using a sample of 185 only-child loss parents, this study employed latent class analysis (a) to explore whether distinct classes of resilience could be identified, (b) to determine the socio-demographic characteristics of each class, and (c) to compare depression and subjective well-being across classes. The results supported a three-class solution: a 'high tenacity-strength but moderate optimism' class, a 'moderate resilience but low self-efficacy' class, and a 'low tenacity but moderate adaption-dependence' class. Parents with low income, medical insurance of a low reimbursement type, or no endowment insurance were overrepresented in the latter two classes, which also had significantly higher depression scores and lower subjective well-being scores than the first class. Future work should attend to socio-economically vulnerable bereaved parents, and a flexible economic assistance policy is needed. Targeted resilience interventions should emphasize optimism for the first class, self-efficacy for the second, and tenacity for the third. Copyright © 2016 John Wiley & Sons, Ltd.
Class Size Effects on Fourth-Grade Mathematics Achievement: Evidence from TIMSS 2011
ERIC Educational Resources Information Center
Li, Wei; Konstantopoulos, Spyros
2016-01-01
Class size reduction policies have been widely implemented around the world in recent years. However, findings about the effects of class size on student achievement have been mixed. This study examines class size effects on fourth-grade mathematics achievement in 14 European countries using data from TIMSS (Trends in International Mathematics and…
Making Sense of Continuing and Renewed Class-Size Findings and Interest.
ERIC Educational Resources Information Center
Achilles, C. M.; Finn, J. D.
In this paper, the authors examine several factors related to class size. The purpose of the presentation is to: (1) trace the evolution of class-size research; (2) briefly describe the Student Achievement Ratio (STAR) class-size experiment; (3) summarize the early and the later student outcomes of STAR participants; (4) outline the…
ERIC Educational Resources Information Center
Bascia, Nina; Faubert, Brenton
2012-01-01
This article reviews the literature base on class size reduction and proposes a new analytic framework that we believe provides practically useful explanations of how primary class size reduction works. It presents descriptions of classroom practice and grounded explanations for how class size reduction affects educational core activities by…
Class Size Effects on Mathematics Achievement in Cyprus: Evidence from TIMSS
ERIC Educational Resources Information Center
Konstantopoulos, Spyros; Shen, Ting
2016-01-01
Class size reduction has been viewed as one school mechanism that can improve student achievement. Nonetheless, the literature has reported mixed findings about class size effects. We used 4th- and 8th-grade data from TIMSS 2003 and 2007 to examine the association between class size and mathematics achievement in public schools in Cyprus. We…
ERIC Educational Resources Information Center
Achilles, Charles M.
2012-01-01
This brief summarizes findings on class size from over 25 years of work on the Tennessee Student Teacher Achievement Ratio (STAR) randomized, longitudinal experiment, and other Class-Size Reduction (CSR) studies throughout the United States, Australia, Hong Kong, Sweden, Great Britain, and elsewhere. The brief concludes with recommendations. The…
ERIC Educational Resources Information Center
Levy, Mike; Kennedy, Claire
2010-01-01
This paper considers the design and development of CALL materials with the aim of achieving an optimal mix between in-class and out-of-class learning in the context of teaching Italian at an Australian university. The authors discuss three projects in relation to the following themes: (a) conceptions of the in-class/out-of-class relationship, (b)…
NASA Astrophysics Data System (ADS)
Jiang, Li; Shi, Tielin; Xuan, Jianping
2012-05-01
Generally, the vibration signals of faulty bearings are non-stationary and highly nonlinear under complicated operating conditions. It is therefore challenging to extract optimal features that improve classification while reducing feature dimension. Kernel marginal Fisher analysis (KMFA) is a supervised manifold learning algorithm for feature extraction and dimensionality reduction. To avoid the small-sample-size problem in KMFA, we propose regularized KMFA (RKMFA). A simple and efficient intelligent fault diagnosis method based on RKMFA is put forward and applied to fault recognition of rolling bearings. To extract nonlinear features directly from the original high-dimensional vibration signals, RKMFA constructs two graphs describing intra-class compactness and inter-class separability, combining manifold learning with the Fisher criterion. The resulting optimal low-dimensional features are fed into a simple K-nearest neighbor (KNN) classifier to recognize different fault categories of bearings. The experimental results demonstrate that the proposed approach improves fault classification performance and outperforms conventional approaches.
Class Size: What Research Says and What It Means for State Policy
ERIC Educational Resources Information Center
Whitehurst, Grover J.; Chingos, Matthew M.
2011-01-01
Class size is one of the small number of variables in American K-12 education that are both thought to influence student learning and are subject to legislative action. Legislative mandates on maximum class size have been very popular at the state level. In recent decades, at least 24 states have mandated or incentivized class-size reduction…
Review of "Class Size: What Research Says and What It Means for State Policy"
ERIC Educational Resources Information Center
Whitmore Schanzenbach, Diane
2011-01-01
"Class Size: What Research Says and What It Means for State Policy" argues that increasing average class size by one student will save about 2% of total education spending with negligible impact on academic achievement. It justifies this conclusion on the basis that Class-Size Reduction (CSR) is not particularly effective and is not as…
Optimal processor assignment for pipeline computations
NASA Technical Reports Server (NTRS)
Nicol, David M.; Simha, Rahul; Choudhury, Alok N.; Narahari, Bhagirath
1991-01-01
The availability of large-scale multitasked parallel architectures introduces the following processor assignment problem for pipelined computations: given a set of tasks, their precedence constraints, and their experimentally determined individual response times for different processor counts, find an assignment of processors to tasks. Two objectives are of interest: minimal response time given a throughput requirement, and maximal throughput given a response time requirement. These assignment problems differ considerably from the classical mapping problem, in which several tasks share a processor; here it is assumed that a large number of processors are to be assigned to a relatively small number of tasks. Efficient assignment algorithms were developed for different classes of task structures. For a p-processor system and a series-parallel precedence graph with n constituent tasks, an O(np²) algorithm finds the optimal assignment for the response time optimization problem, and the assignment optimizing the constrained throughput can be found in O(np² log p) time. Special cases of linear, independent, and tree graphs are also considered.
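For the special case of independent tasks, minimizing total response time under a fixed processor budget is a classic resource-allocation dynamic program over the measured per-task times. The sketch below uses an invented timing table and covers only this special case; the paper's O(np²) algorithm for series-parallel graphs is more general.

```python
def assign_processors(times, P):
    """times[i][p-1] = measured response time of task i on p processors.
    Returns (minimal total response time, processors per task)."""
    n = len(times)
    INF = float('inf')
    # best[i][p] = min total time for the first i tasks using exactly p procs
    best = [[INF] * (P + 1) for _ in range(n + 1)]
    choice = [[0] * (P + 1) for _ in range(n + 1)]
    best[0][0] = 0.0
    for i in range(1, n + 1):
        for p in range(i, P + 1):
            # give task i-1 q processors, leaving >= 1 for each earlier task
            for q in range(1, p - (i - 1) + 1):
                cand = best[i - 1][p - q] + times[i - 1][q - 1]
                if cand < best[i][p]:
                    best[i][p] = cand
                    choice[i][p] = q
    alloc, p = [], P
    for i in range(n, 0, -1):      # recover the allocation
        q = choice[i][p]
        alloc.append(q)
        p -= q
    alloc.reverse()
    return best[n][P], alloc

# two tasks, 3 processors; task 0 parallelizes well, task 1 does not
times = [[6.0, 3.0, 2.0],   # task 0 on 1, 2, 3 processors
         [4.0, 3.9, 3.8]]   # task 1 on 1, 2, 3 processors
total, alloc = assign_processors(times, 3)
```

The dynamic program correctly gives the well-parallelizing task the extra processor rather than splitting evenly by speedup alone.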
New NAS Parallel Benchmarks Results
NASA Technical Reports Server (NTRS)
Yarrow, Maurice; Saphir, William; VanderWijngaart, Rob; Woo, Alex; Kutler, Paul (Technical Monitor)
1997-01-01
NPB2 (NAS (NASA Advanced Supercomputing) Parallel Benchmarks 2) is an implementation, based on Fortran and the MPI (message passing interface) message passing standard, of the original NAS Parallel Benchmark specifications. NPB2 programs are run with little or no tuning, in contrast to NPB vendor implementations, which are highly optimized for specific architectures. NPB2 results complement, rather than replace, NPB results. Because they have not been optimized by vendors, NPB2 implementations approximate the performance a typical user can expect for a portable parallel program on distributed memory parallel computers. Together these results provide an insightful comparison of the real-world performance of high-performance computers. New NPB2 features: New implementation (CG), new workstation class problem sizes, new serial sample versions, more performance statistics.
Hysteretic sediment fluxes in rainfall-driven soil erosion
NASA Astrophysics Data System (ADS)
Cheraghi, Mohsen; Jomaa, Seifeddine; Sander, Graham C.; Barry, D. Andrew
2017-04-01
Hysteresis patterns of different sediment particle sizes were studied via a detailed laboratory study and modelling. Seven continuous rainfall events with stepwise-varying rainfall intensities (30, 37.5, 45, 60, 45, 37.5 and 30 mm h-1, each of 20 min duration) were conducted using a 5-m × 2-m erosion flume. Flow rates and sediment concentration data were measured using flume discharge samples and interpreted using the Hairsine and Rose (HR) soil erosion model. The total sediment concentration and the concentrations of seven particle size classes (< 2, 2-20, 20-50, 50-100, 100-315, 315-1000 and > 1000 μm) were measured. For the total eroded soil and the finer particle sizes (< 2, 2-20 and 20-50 μm), there was a clockwise pattern in the sediment concentration versus discharge curves. However, as the particle size increased, concentrations tended to vary linearly with discharge. The HR model predictions for the total eroded soil and the finer particle size classes (up to 100 μm) were in good agreement with the experimental results. For the larger particles, the model provided qualitative agreement with the measurements but the concentration values differed. In agreement with previous investigations using the HR model, these differences were attributed to the HR model's assumption of suspended sediment flow, which does not account for saltation and rolling motions. Keywords: Hysteresis effects, Sediment transport, Flume experiment, Splash soil erosion, Hairsine and Rose model, Particle Swarm Optimization.
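The clockwise pattern reported for the finer fractions can be quantified with the signed (shoelace) area of the concentration-discharge loop: with discharge on the x-axis, a negative signed area indicates clockwise traversal. This is a common diagnostic, not the paper's method, and the four-point loop below is invented for illustration.

```python
def loop_signed_area(discharge, concentration):
    """Shoelace signed area of the closed C-Q loop; a negative value means
    clockwise traversal (concentration peaks before discharge does)."""
    n = len(discharge)
    area = 0.0
    for i in range(n):
        j = (i + 1) % n
        area += discharge[i] * concentration[j] - discharge[j] * concentration[i]
    return 0.5 * area

# rising limb at high concentration, falling limb at low concentration
q = [1, 2, 3, 2]
c = [2, 2, 1, 1]
```

Here the rising limb carries more sediment than the falling limb at the same discharge, so the loop is clockwise and the signed area comes out negative.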
NASA Astrophysics Data System (ADS)
Sun, Deyong; Huan, Yu; Qiu, Zhongfeng; Hu, Chuanmin; Wang, Shengqiang; He, Yijun
2017-10-01
Phytoplankton size class (PSC), a measure of different phytoplankton functional and structural groups, is a key parameter for understanding many marine ecological and biogeochemical processes. In turbid waters, where optical properties may be influenced by terrigenous discharge and non-phytoplankton water constituents, remote estimation of PSC is still a challenging task. Here, based on measurements of phytoplankton diagnostic pigments, total chlorophyll a, and spectral reflectance in turbid waters of the Bohai Sea and Yellow Sea during summer 2015, a customized model is developed and validated to estimate PSC in the two semienclosed seas. Five diagnostic pigments determined through high-performance liquid chromatography (HPLC) measurements are first used to produce weighting factors to model phytoplankton biomass (using total chlorophyll a as a surrogate) with relatively high accuracy. Then, a common method used to calculate the contributions of microphytoplankton, nanophytoplankton, and picophytoplankton to the phytoplankton assemblage (i.e., Fm, Fn, and Fp) is customized using local HPLC and other data. Exponential functions are tuned to model the size-specific chlorophyll a concentrations (Cm, Cn, and Cp for microphytoplankton, nanophytoplankton, and picophytoplankton, respectively) with remote-sensing reflectance (Rrs) and total chlorophyll a as the model inputs. This PSC model shows two improvements over previous models: (1) a practical strategy (i.e., model Cp and Cn first, and then derive Cm as C-Cp-Cn) with an optimized spectral band (680 nm) for Rrs as the model input; and (2) local parameterization, including a local chlorophyll a algorithm. The performance of the PSC model is validated using in situ data that were not used in the model development.
Application of the PSC model to GOCI (Geostationary Ocean Color Imager) data leads to spatial and temporal distribution patterns of phytoplankton size classes (PSCs) that are consistent with results reported from field measurements by other researchers. While the applicability of the PSC model together with its parameterization to other optically complex regions and to other seasons is unknown, the findings of this study suggest that the approach to develop such a model may be extendable to other cases as long as local data are used to select the optimal band and to determine the model coefficients.
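The strategy of modelling the smaller size classes first and obtaining the micro fraction by difference (Cm = C - Cp - Cn) can be sketched with a generic three-component formulation using saturating exponentials of total chlorophyll a. The coefficient values below are placeholders, not the locally fitted parameters of this study, and the Rrs(680) dependence is omitted.

```python
import math

def size_class_chl(C, Cpn_max=0.77, Spn=0.94, Cp_max=0.13, Sp=6.15):
    """Three-component partition of total chlorophyll a, C (mg m^-3):
    pico and combined pico+nano follow saturating exponentials,
    and the micro fraction is the remainder (Cm = C - Cp - Cn)."""
    Cpn = Cpn_max * (1.0 - math.exp(-Spn * C))  # pico + nano
    Cp = Cp_max * (1.0 - math.exp(-Sp * C))     # pico only
    Cn = Cpn - Cp                               # nano by difference
    Cm = C - Cpn                                # micro by difference
    return Cm, Cn, Cp
```

By construction the three classes always sum to the total, and the micro share grows as total biomass increases, which is the qualitative behaviour such models are built to capture.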
Impact of Company Size on Manufacturing Improvement Practices: An empirical study
NASA Astrophysics Data System (ADS)
Syan, C. S.; Ramoutar, K.
2014-07-01
There is a constant search for ways to achieve a competitive advantage through new manufacturing techniques, and the best performing manufacturing companies tend to use world-class manufacturing (WCM) practices. Although the last few years have witnessed phenomenal growth in the use of WCM techniques, their effectiveness is not well understood, particularly in the context of less developed countries. This paper presents an empirical study of the impact of company size on improving manufacturing performance in organizations based in Trinidad and Tobago (T&T). Empirical data were collected via a questionnaire survey sent to 218 manufacturing firms in T&T, covering five company sizes and seven industry sectors. The survey data were analysed with the Statistical Package for the Social Sciences (SPSS). The study identified factors that facilitate and impede improvements in manufacturing performance; their relative importance varies with company size and industry sector. Findings indicate that T&T manufacturers still rely on traditional approaches compared with world-class manufacturers. In the majority of organizations these practices were not fully implemented even though the implementation process began more than 5 years earlier. The findings provide insights for formulating more effective operational strategies and, subsequently, action plans for implementing WCM in T&T manufacturers.
Liu, Hao; Shao, Qi; Fang, Xuelin
2017-02-01
For the class-E amplifier in a wireless power transfer (WPT) system, the design parameters are usually determined from the nominal model. However, this model neglects the conduction loss and voltage stress of the MOSFET and cannot guarantee the highest efficiency in a WPT system for biomedical implants. To solve this problem, this paper proposes a novel circuit model of the subnominal class-E amplifier. On a WPT platform for a capsule endoscope, the proposed model was validated to be effective, and the relationship between the amplifier's design parameters and its characteristics was analyzed. At a given duty ratio, the design parameters yielding the highest efficiency and safe voltage stress are derived; this condition is called the 'optimal subnominal condition.' The amplifier's efficiency reaches a maximum of 99.3% at a duty ratio of 0.097. Furthermore, at a duty ratio of 0.5, the measured efficiency under the optimal subnominal condition reaches 90.8%, which is 15.2% higher than that of the nominal condition. A WPT experiment with a receiving unit was then carried out to validate the feasibility of the optimized amplifier. In general, the design parameters of a class-E amplifier in a WPT system for biomedical implants can be determined with the optimization method proposed in this paper.
A Peptide Filtering Relation Quantifies MHC Class I Peptide Optimization
Goldstein, Leonard D.; Howarth, Mark; Cardelli, Luca; Emmott, Stephen; Elliott, Tim; Werner, Joern M.
2011-01-01
Major Histocompatibility Complex (MHC) class I molecules enable cytotoxic T lymphocytes to destroy virus-infected or cancerous cells, thereby preventing disease progression. MHC class I molecules provide a snapshot of the contents of a cell by binding to protein fragments arising from intracellular protein turnover and presenting these fragments at the cell surface. Competing fragments (peptides) are selected for cell-surface presentation on the basis of their ability to form a stable complex with MHC class I, by a process known as peptide optimization. A better understanding of the optimization process is important for our understanding of immunodominance, the predominance of some T lymphocyte specificities over others, which can determine the efficacy of an immune response, the danger of immune evasion, and the success of vaccination strategies. In this paper we present a dynamical systems model of peptide optimization by MHC class I. We incorporate the chaperone molecule tapasin, which has been shown to enhance peptide optimization to different extents for different MHC class I alleles. Using a combination of published and novel experimental data to parameterize the model, we arrive at a relation of peptide filtering, which quantifies peptide optimization as a function of peptide supply and peptide unbinding rates. From this relation, we find that tapasin enhances peptide unbinding to improve peptide optimization without significantly delaying the transit of MHC to the cell surface, and differences in peptide optimization across MHC class I alleles can be explained by allele-specific differences in peptide binding. Importantly, our filtering relation may be used to dynamically predict the cell surface abundance of any number of competing peptides by MHC class I alleles, providing a quantitative basis to investigate viral infection or disease at the cellular level. 
We exemplify this by simulating optimization of the distribution of peptides derived from Human Immunodeficiency Virus Gag-Pol polyprotein. PMID:22022238
Chiefs' Pocket Guide to Class Size: A Research Synthesis to Inform State Class Size Policies
ERIC Educational Resources Information Center
Council of Chief State School Officers, 2012
2012-01-01
Few questions in public education discourse benefit as much from research-based evidence as the question of class size--the pursuit of the ideal number of students that should be co-located for any particular period of instruction. But for policymakers, research on class size can be an embarrassment of riches, and much of the research appears to…
NASA Astrophysics Data System (ADS)
Vecherin, Sergey N.; Wilson, D. Keith; Pettit, Chris L.
2010-04-01
Determination of an optimal configuration (numbers, types, and locations) of a sensor network is an important practical problem. In most applications, complex signal propagation effects and inhomogeneous coverage preferences lead to an optimal solution that is highly irregular and nonintuitive. The general optimization problem can be strictly formulated as a binary linear programming problem. Due to the combinatorial nature of this problem, however, its strict solution requires significant computational resources (NP-complete class of complexity) and is unobtainable for large spatial grids of candidate sensor locations. For this reason, a greedy algorithm for approximate solution was recently introduced [S. N. Vecherin, D. K. Wilson, and C. L. Pettit, "Optimal sensor placement with terrain-based constraints and signal propagation effects," Unattended Ground, Sea, and Air Sensor Technologies and Applications XI, SPIE Proc. Vol. 7333, paper 73330S (2009)]. Here further extensions to the developed algorithm are presented to include such practical needs and constraints as sensor availability, coverage by multiple sensors, and wireless communication of the sensor information. Both communication and detection are considered in a probabilistic framework. Communication signal and signature propagation effects are taken into account when calculating probabilities of communication and detection. Comparison of approximate and strict solutions on reduced-size problems suggests that the approximate algorithm yields quick and good solutions, which thus justifies using that algorithm for full-size problems. Examples of three-dimensional outdoor sensor placement are provided using a terrain-based software analysis tool.
Singh, Bhupinder; Khurana, Lalit; Bandyopadhyay, Shantanu; Kapil, Rishi; Katare, O O P
2011-11-01
Carvedilol, a widely prescribed cardiovascular drug for hypertension and congestive heart failure, exhibits low and variable bioavailability owing to poor absorption and extensive hepatic first-pass metabolism. The current research work, therefore, entails formulation development of liquid self-nano-emulsifying drug delivery systems (SNEDDS) to enhance the bioavailability of carvedilol by facilitating its transport via lymphatic circulation. The formulation constituents, i.e., lipids, surfactants, and co-surfactants, were selected on the basis of solubility studies. Pseudo-ternary phase diagrams were constructed to guide the selection of a blend of lipidic (i.e., Capmul PG8) and hydrophilic components (i.e., Cremophor EL as surfactant and Transcutol HP as co-surfactant) for an efficient and robust SNEDDS formulation. The SNEDDS, systematically optimized employing a central composite design (CCD), were evaluated for various response variables, viz. drug release parameters, emulsification time, emulsion droplet size, and mean dissolution time. In vitro drug release studies showed that release from the SNEDDS followed non-Fickian kinetic behavior. TEM imaging of the optimized formulation confirmed the uniform shape and nano size of the system. Accelerated studies indicated high stability of the optimized formulation over 6 months. In situ perfusion studies carried out in Wistar rats revealed severalfold augmentation in the permeability and absorption potential of the optimized formulation vis-à-vis the marketed formulation. Thus, the present studies ratified the potential of SNEDDS in augmenting the oral bioavailability of BCS class II drugs.
Optimization and resilience of complex supply-demand networks
NASA Astrophysics Data System (ADS)
Zhang, Si-Ping; Huang, Zi-Gang; Dong, Jia-Qi; Eisenberg, Daniel; Seager, Thomas P.; Lai, Ying-Cheng
2015-06-01
Supply-demand processes take place on a large variety of real-world networked systems ranging from power grids and the internet to social networking and urban systems. In a modern infrastructure, supply-demand systems are constantly expanding, leading to a constant increase in load requirements for resources and, consequently, to problems such as low efficiency, resource scarcity, and partial system failures. Under certain conditions, global catastrophe on the scale of the whole system can occur through the dynamical process of cascading failures. We investigate optimization and resilience of time-varying supply-demand systems by constructing network models of such systems, where resources are transported from supplier sites to users through various links. Here, by optimization we mean minimization of the maximum load on links, and system resilience can be characterized by the cascading failure size, i.e., the number of users who fail to connect with suppliers. We consider two representative classes of supply schemes: load-driven supply and fixed-fraction supply. Our findings are: (1) optimized systems are more robust, since relatively smaller cascading failures occur when triggered by external perturbation to the links; (2) a large fraction of links can be free of load if resources are directed along shortest paths; (3) redundant links can help reroute traffic but may undesirably propagate failures and enlarge the cascade size; (4) the patterns of cascading failures depend strongly upon the capacity of links; (5) the specific location of the trigger determines the specific route of a cascading failure but has little effect on the final cascade size; (6) system expansion typically reduces efficiency; and (7) when the locations of the suppliers are optimized over a long expansion period, fewer suppliers are required.
These results hold for heterogeneous networks in general, providing insights into designing optimal and resilient complex supply-demand systems that expand constantly in time.
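The load bookkeeping behind findings (1)-(3) can be sketched in a few lines. This is a toy illustration, not the authors' model: it routes one unit of demand from each user to its nearest supplier along a BFS shortest path in an unweighted graph and tallies the load on every link, from which the maximum link load and the fraction of load-free links can be read off.

```python
from collections import deque

def shortest_path_loads(edges, suppliers, users):
    """Route one unit from each user to its nearest supplier along a BFS
    shortest path; return the resulting load on each (undirected) link."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    load = {tuple(sorted(e)): 0 for e in edges}
    for user in users:
        prev = {user: None}  # BFS tree back-pointers
        q = deque([user])
        hit = None
        while q:
            x = q.popleft()
            if x in suppliers:
                hit = x
                break
            for y in adj[x]:
                if y not in prev:
                    prev[y] = x
                    q.append(y)
        # walk back from the supplier, incrementing each traversed link
        while hit is not None and prev[hit] is not None:
            load[tuple(sorted((hit, prev[hit])))] += 1
            hit = prev[hit]
    return load

# Path graph 0-1-2-3 with a supplier at node 0 and users at nodes 2 and 3.
load = shortest_path_loads([(0, 1), (1, 2), (2, 3)], {0}, [2, 3])
print(load)  # → {(0, 1): 2, (1, 2): 2, (2, 3): 1}
```

Links near the supplier accumulate the most load, which is why minimizing the maximum link load is the natural optimization target in such models.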
OVERVIEW OF MONO-ENERGETIC GAMMA-RAY SOURCES & APPLICATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartemann, F V; Albert, F; Anderson, G G
2010-05-18
Recent progress in accelerator physics and laser technology has enabled the development of a new class of tunable gamma-ray light sources based on Compton scattering between a high-brightness, relativistic electron beam and a high-intensity laser pulse produced via chirped-pulse amplification (CPA). A precision, tunable Mono-Energetic Gamma-ray (MEGa-ray) source driven by a compact, high-gradient X-band linac is currently under development and construction at LLNL. High-brightness, relativistic electron bunches produced by an X-band linac designed in collaboration with SLAC will interact with a Joule-class, 10 ps, diode-pumped CPA laser pulse to generate tunable γ-rays in the 0.5-2.5 MeV photon energy range via Compton scattering. This MEGa-ray source will be used to excite nuclear resonance fluorescence in various isotopes. Applications include homeland security, stockpile science and surveillance, nuclear fuel assay, and waste imaging and assay. The source design, key parameters, and current status are presented, along with important applications, including nuclear resonance fluorescence. In conclusion, we have optimized the design of a high-brightness Compton scattering gamma-ray source specifically designed for NRF applications. Two different parameter sets have been considered: one where the number of photons scattered in a single shot reaches approximately 7.5 × 10^8, with a focal spot size around 8 μm; in the second set, the spectral brightness is optimized by using a 20 μm spot size, with 0.2% relative bandwidth.
Does Class Size Make a Difference?
ERIC Educational Resources Information Center
Glass, Gene V.; Down, A. Graham
1979-01-01
Argues that study findings indicate that lowered class size increases student achievement and improves school attitudes. Counter argument indicates there is little educational payoff and great monetary expense in small reductions in class size. (RH)
ERIC Educational Resources Information Center
Graham, Evol
2009-01-01
By reducing class size we will close the achievement gap in public school education, caused by prior neglect especially since the civil rights era of the sixties. Additional, highly qualified and specialized teachers will more effectively manage a smaller class size and serve more individual student needs in the crucial early grades, where a solid…
Bounded-Degree Approximations of Stochastic Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quinn, Christopher J.; Pinar, Ali; Kiyavash, Negar
2017-06-01
We propose algorithms to approximate directed information graphs. Directed information graphs are probabilistic graphical models that depict causal dependencies between stochastic processes in a network. The proposed algorithms identify optimal and near-optimal approximations in terms of Kullback-Leibler divergence. The user-chosen sparsity trades off the quality of the approximation against visual conciseness and computational tractability. One class of approximations contains graphs with specified in-degrees. Another class additionally requires that the graph is connected. For both classes, we propose algorithms to identify the optimal approximations and also near-optimal approximations, using a novel relaxation of submodularity. We also propose algorithms to identify the r-best approximations among these classes, enabling robust decision making.
Joseph B. Roise; Joosang Chung; Chris B. LeDoux
1988-01-01
Nonlinear programming (NP) is applied to the problem of finding optimal thinning and harvest regimes simultaneously with species mix and diameter class distribution. Optimal results for given cases are reported. Results of the NP optimization are compared with prescriptions developed by Appalachian hardwood silviculturists.
Beyond eruptive scenarios: assessing tephra fallout hazard from Neapolitan volcanoes.
Sandri, Laura; Costa, Antonio; Selva, Jacopo; Tonini, Roberto; Macedonio, Giovanni; Folch, Arnau; Sulpizio, Roberto
2016-04-12
Assessment of volcanic hazards is necessary for risk mitigation. Typically, hazard assessment is based on one or a few subjectively chosen representative eruptive scenarios, each using a specific combination of eruptive sizes and intensities to represent a particular size class of eruption. Although such scenarios use a range of representative members to reflect a wider size class, the scenario approach neglects the intrinsic variability of volcanic eruptions and implicitly assumes that inter-class size variability (i.e., the size difference between different eruptive size classes) dominates over intra-class size variability (i.e., the size difference within an eruptive size class), the latter being treated as negligible. So far, no quantitative study has been undertaken to verify this assumption. Here, we adopt a novel Probabilistic Volcanic Hazard Analysis (PVHA) strategy, which accounts for intrinsic eruptive variability, to quantify the tephra fallout hazard in the Campania area. We compare the results of the new probabilistic approach with the classical scenario approach. The results allow us to determine whether a simplified scenario approach can be considered valid, and to quantify the bias that arises when the full variability is not accounted for.
ERIC Educational Resources Information Center
Sharp, Mark A.
The purpose of this paper was to share findings from an earlier study and to provide a framework for administrators to use in the implementation of class-size reduction (CSR) in their buildings. The study examined actual and average class size (CS), pupil-teacher ratios (PTR), and their differences. A primary goal was to clarify the ramifications…
Fundamental differences between optimization code test problems in engineering applications
NASA Technical Reports Server (NTRS)
Eason, E. D.
1984-01-01
The purpose here is to suggest that there is at least one fundamental difference between the problems used for testing optimization codes and the problems that engineers often need to solve; in particular, the level of precision that can be practically achieved in the numerical evaluation of the objective function, derivatives, and constraints. This difference affects the performance of optimization codes, as illustrated by two examples. Two classes of optimization problem were defined. Class One functions and constraints can be evaluated to a high precision that depends primarily on the word length of the computer. Class Two functions and/or constraints can only be evaluated to a moderate or a low level of precision for economic or modeling reasons, regardless of the computer word length. Optimization codes have not been adequately tested on Class Two problems. There are very few Class Two test problems in the literature, while there are literally hundreds of Class One test problems. The relative performance of two codes may be markedly different for Class One and Class Two problems. Less sophisticated direct search type codes may be less likely to be confused or to waste many function evaluations on Class Two problems. The analysis accuracy and minimization performance are related in a complex way that probably varies from code to code. On a problem where the analysis precision was varied over a range, the simple Hooke and Jeeves code was more efficient at low precision while the Powell code was more efficient at high precision.
Segmentation of thalamus from MR images via task-driven dictionary learning
NASA Astrophysics Data System (ADS)
Liu, Luoluo; Glaister, Jeffrey; Sun, Xiaoxia; Carass, Aaron; Tran, Trac D.; Prince, Jerry L.
2016-03-01
Automatic thalamus segmentation is useful to track changes in thalamic volume over time. In this work, we introduce a task-driven dictionary learning framework to find the optimal dictionary given a set of eleven features obtained from T1-weighted MRI and diffusion tensor imaging. In this dictionary learning framework, a linear classifier is designed concurrently to classify voxels as belonging to the thalamus or non-thalamus class. Morphological post-processing is applied to produce the final thalamus segmentation. Due to the uneven size of the training data samples for the non-thalamus and thalamus classes, a non-uniform sampling scheme is proposed to train the classifier to better discriminate between the two classes around the boundary of the thalamus. Experiments are conducted on data collected from 22 subjects with manually delineated ground truth. The experimental results are promising in terms of improvements in the Dice coefficient of the thalamus segmentation over state-of-the-art atlas-based thalamus segmentation algorithms.
Segmentation of Thalamus from MR images via Task-Driven Dictionary Learning.
Liu, Luoluo; Glaister, Jeffrey; Sun, Xiaoxia; Carass, Aaron; Tran, Trac D; Prince, Jerry L
2016-02-27
Automatic thalamus segmentation is useful to track changes in thalamic volume over time. In this work, we introduce a task-driven dictionary learning framework to find the optimal dictionary given a set of eleven features obtained from T1-weighted MRI and diffusion tensor imaging. In this dictionary learning framework, a linear classifier is designed concurrently to classify voxels as belonging to the thalamus or non-thalamus class. Morphological post-processing is applied to produce the final thalamus segmentation. Due to the uneven size of the training data samples for the non-thalamus and thalamus classes, a non-uniform sampling scheme is proposed to train the classifier to better discriminate between the two classes around the boundary of the thalamus. Experiments are conducted on data collected from 22 subjects with manually delineated ground truth. The experimental results are promising in terms of improvements in the Dice coefficient of the thalamus segmentation over state-of-the-art atlas-based thalamus segmentation algorithms.
ePix: a class of architectures for second generation LCLS cameras
Dragone, A.; Caragiulo, P.; Markovic, B.; ...
2014-03-31
ePix is a novel class of ASIC architectures, based on a common platform, optimized to build modular scalable detectors for LCLS. The platform architecture is composed of a random-access analog matrix of pixels with global shutter, fast parallel column readout, and dedicated sigma-delta analog-to-digital converters per column. It also implements a dedicated control interface and all the required support electronics to perform configuration, calibration, and readout of the matrix. Based on this platform, a class of front-end ASICs and several camera modules meeting different requirements can be developed by designing specific pixel architectures. This approach reduces development time and expands the possibility of integrating detector modules with different size, shape, or functionality in the same camera. The ePix platform is currently under development together with the first two integrating pixel architectures: ePix100, dedicated to ultra-low-noise applications, and ePix10k, for high-dynamic-range applications.
Enabling a Better Aft Heat Shield Solution for Future Mars Science Laboratory Class Vehicles
NASA Technical Reports Server (NTRS)
McGuire, Mary K.; Covington, Melmoth A.; Goldstein, Howard E.; Arnold, James O.; Beck, Robin
2013-01-01
System studies are described that compare masses and estimated manufacturing costs of options for the as-flown Mars Science Laboratory (MSL) aft body thermal protection system (TPS). The as-flown material was Super Lightweight Ablator (SLA) 561-V, and its thickness was not optimized using the standard TPS Sizer tool widely used for heat shield design. Use of the TPS sizing tool suggests that optimization of the SLA thickness could reduce the aft heat shield mass by 40 percent. Analysis of the predicted aft-shell aerothermodynamics suggests that the bulk of MSL-class entry vehicle heat shields could incorporate Advanced Flexible Reusable Surface Insulation (AFRSI). AFRSI has a well-established record of relatively inexpensive manufacturing and flight certification based on its use on the lee side of the Space Shuttle. Runs with the TPS Sizer show that the AFRSI solution would be 60 percent lighter than the as-flown SLA. The issue of Reaction Control System (RCS) heating on the aft shell could be addressed by locally impregnating the AFRSI with silicone to enhance its robustness to short bursts of heating. Stagnation-point arcjet testing has shown that silicone-impregnated AFRSI performs well at heat rates of 115 W/cm2 and 0.1 atmospheres for a duration of 40 seconds, far beyond conditions that are expected for MSL-class vehicles. The paper concludes with a discussion of manufacturing processes for AFRSI, impregnation approaches, and relative cost comparisons to the SLA solution.
School Class Size: Research and Policy
ERIC Educational Resources Information Center
Glass, Gene V.; And Others
This book synthesizes research evidence to demonstrate that 1) class size is strongly related to pupil achievement; 2) smaller classes are more conducive to improved pupil performance than larger classes; 3) smaller classes provide more opportunities to adapt learning programs to individual needs; 4) pupils in smaller classes have more interest in…
Optimally Stopped Optimization
NASA Astrophysics Data System (ADS)
Vinci, Walter; Lidar, Daniel A.
2016-11-01
We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark simulated annealing on a class of maximum-2-satisfiability (MAX2SAT) problems. We also compare the performance of a D-Wave 2X quantum annealer to the Hamze-Freitas-Selby (HFS) solver, a specialized classical heuristic algorithm designed for low-tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N = 1098 variables, the D-Wave device is 2 orders of magnitude faster than the HFS solver, and, modulo known caveats related to suboptimal annealing times, exhibits identical scaling with problem size.
Class Size and Student Diversity: Two Sides of the Same Coin. Teacher Voice
ERIC Educational Resources Information Center
Froese-Germain, Bernie; Riel, Rick; McGahey, Bob
2012-01-01
Among Canadian teacher unions, discussions of class size are increasingly being informed by the importance of considering the diversity of student needs within the classroom (often referred to as class composition). For teachers, both class size and diversity matter. Teachers consistently adapt their teaching to address the individual needs of the…
Class Size Reduction: Implementation and Solutions.
ERIC Educational Resources Information Center
Krieger, Jean
This is a report of a study designed to discover the nature of interactions between teachers and students in regular-size classes (25 or more students) and small-size classes (fewer than 18 students). It also describes the efforts of one public school to maintain smaller classes. A review of the literature and observations of 11 primary classrooms…
Additional Evidence on the Relationship between Class Size and Student Performance
ERIC Educational Resources Information Center
Arias, J. J.; Walker, Douglas M.
2004-01-01
Much of the economic education literature suggests that the principles of economics class size does not significantly affect student performance. However, study methods have varied in terms of the aggregation level (student or class), the measure of performance (TUCE or course letter grade), and the class size measure (e.g., students who completed…
Meeting the Public Health Challenge of Pain in Later Life: What Role Can Senior Centers Play?
Tobias, Karen R.; Lama, Sonam D.; Parker, Samantha J.; Henderson, Charles R.; Nickerson, Allison J.; Reid, M.C.
2013-01-01
Background: Interest in nonpharmacologic approaches for managing pain continues to grow. Aim: To determine the types of pain-relevant programs offered by senior centers and whether the programs varied by clients' race/ethnicity status and center size. Design and methods: We conducted a telephone survey. Respondents were presented with a list of 15 programs (plus "other") and asked: (1) whether the activity was offered and, if so, how often; (2) if they believed the programs had value for seniors with pain; and (3) whether the classes were advertised as a means of achieving pain relief. Setting: New York City. Participants/subjects: Senior center agency staff, i.e., center directors and activity program coordinators. Results: Of 204 center staff contacted, 195 (95.6%) participated. The most common programs offered were movement-based, including exercise (by 91.8% of the centers), dance (72.3%), walking clubs (71.8%), yoga (65.6%), and Tai Chi (53.3%) classes. Creative arts programs were also frequently offered, including music (58.5%) and fine arts (47.7%). Programs such as stress management (27%) and relaxation (26%) classes were less commonly offered. Most respondents identified movement-based programs as helpful for seniors with pain, while few identified creative arts classes as potentially beneficial. The programs/classes offered were infrequently advertised as a means of helping seniors manage pain, and varied by clients' race/ethnicity status and center size. Conclusion: Programs that have potential utility for older adults with pain are commonly offered by senior centers. Future research should determine optimal strategies for engaging older adults in these programs in the senior center setting. PMID:24144569
Meeting the public health challenge of pain in later life: what role can senior centers play?
Tobias, Karen R; Lama, Sonam D; Parker, Samantha J; Henderson, Charles R; Nickerson, Allison J; Reid, M Carrington
2014-12-01
Interest in nonpharmacologic approaches for managing pain continues to grow. The aim of this study was to determine the types of pain-relevant programs offered by senior centers and whether the programs varied by clients' race/ethnicity status and center size. A telephone survey was conducted. Respondents were presented with a list of 15 programs and the option to choose "other" and asked (1) whether the activity was offered and, if so, how often; (2) if they believed the programs had value for seniors with pain; and (3) whether the classes were advertised as a means of achieving pain relief. Of 204 center staff contacted, 195 (95.6%) participated. The most common programs offered were movement-based, including exercise (by 91.8% of the centers), dance (72.3%), walking clubs (71.8%), yoga (65.6%), and Tai Chi (53.3%) classes. Creative arts programs were also frequently offered, including music (58.5%) and fine arts (47.7%). Programs such as stress management (27%) and relaxation (26%) classes were less commonly offered. Most respondents identified movement-based programs as helpful for seniors with pain, but few identified creative arts classes as potentially beneficial. The programs/classes offered were infrequently advertised as a means of helping seniors manage pain and varied by clients' race/ethnicity status and center size. Programs that have potential utility for older adults with pain are commonly offered by senior centers. Future research should determine optimal strategies for engaging older adults in these programs in the senior center setting. Copyright © 2014 American Society for Pain Management Nursing. Published by Elsevier Inc. All rights reserved.
Automatic discovery of optimal classes
NASA Technical Reports Server (NTRS)
Cheeseman, Peter; Stutz, John; Freeman, Don; Self, Matthew
1986-01-01
A criterion, based on Bayes' theorem, is described that defines the optimal set of classes (a classification) for a given set of examples. This criterion is transformed into an equivalent minimum message length criterion with an intuitive information interpretation. This criterion does not require that the number of classes be specified in advance, this is determined by the data. The minimum message length criterion includes the message length required to describe the classes, so there is a built in bias against adding new classes unless they lead to a reduction in the message length required to describe the data. Unfortunately, the search space of possible classifications is too large to search exhaustively, so heuristic search methods, such as simulated annealing, are applied. Tutored learning and probabilistic prediction in particular cases are an important indirect result of optimal class discovery. Extensions to the basic class induction program include the ability to combine category and real value data, hierarchical classes, independent classifications and deciding for each class which attributes are relevant.
Some comments on Anderson and Pospahala's correction of bias in line transect sampling
Anderson, D.R.; Burnham, K.P.; Chain, B.R.
1980-01-01
ANDERSON and POSPAHALA (1970) investigated the estimation of wildlife population size using the belt or line transect sampling method and devised a correction for bias, thus leading to an estimator with interesting characteristics. This work was given a uniform mathematical framework in BURNHAM and ANDERSON (1976). In this paper we show that the ANDERSON-POSPAHALA estimator is optimal in the sense of being the (unique) best linear unbiased estimator within the class of estimators which are linear combinations of cell frequencies, provided certain assumptions are met.
Constraints on the adult-offspring size relationship in protists.
Caval-Holme, Franklin; Payne, Jonathan; Skotheim, Jan M
2013-12-01
The relationship between adult and offspring size is an important aspect of reproductive strategy. Although this filial relationship has been extensively examined in plants and animals, we currently lack comparable data for protists, whose strategies may differ due to the distinct ecological and physiological constraints on single-celled organisms. Here, we report measurements of adult and offspring sizes in 3888 species and subspecies of foraminifera, a class of large marine protists. Foraminifera exhibit a wide range of reproductive strategies; species of similar adult size may have offspring whose sizes vary 100-fold. Yet, a robust pattern emerges. The minimum (5th percentile), median, and maximum (95th percentile) offspring sizes exhibit a consistent pattern of increase with adult size independent of environmental change and taxonomic variation over the past 400 million years. The consistency of this pattern may arise from evolutionary optimization of the offspring size-fecundity trade-off and/or from cell-biological constraints that limit the range of reproductive strategies available to single-celled organisms. When compared with plants and animals, foraminifera extend the evidence that offspring size covaries with adult size across an additional five orders of magnitude in organism size. © 2013 The Author(s). Evolution © 2013 The Society for the Study of Evolution.
Lin, Wei; Feng, Rui; Li, Hongzhe
2014-01-01
In genetical genomics studies, it is important to jointly analyze gene expression data and genetic variants in exploring their associations with complex traits, where the dimensionality of gene expressions and genetic variants can both be much larger than the sample size. Motivated by such modern applications, we consider the problem of variable selection and estimation in high-dimensional sparse instrumental variables models. To overcome the difficulty of high dimensionality and unknown optimal instruments, we propose a two-stage regularization framework for identifying and estimating important covariate effects while selecting and estimating optimal instruments. The methodology extends the classical two-stage least squares estimator to high dimensions by exploiting sparsity using sparsity-inducing penalty functions in both stages. The resulting procedure is efficiently implemented by coordinate descent optimization. For the representative L1 regularization and a class of concave regularization methods, we establish estimation, prediction, and model selection properties of the two-stage regularized estimators in the high-dimensional setting where the dimensionalities of covariates and instruments are both allowed to grow exponentially with the sample size. The practical performance of the proposed method is evaluated by simulation studies and its usefulness is illustrated by an analysis of mouse obesity data. Supplementary materials for this article are available online. PMID:26392642
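A minimal numerical sketch of the two-stage idea follows, using lasso selection plus an OLS refit in the first stage. The simulated data, the plain coordinate-descent lasso, and the single endogenous regressor are all illustrative simplifications of the paper's framework, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, 10

# Only two of ten candidate instruments are truly relevant, and an unobserved
# confounder u makes naive OLS inconsistent. All numbers are invented.
Z = rng.normal(size=(n, p))
gamma = np.zeros(p)
gamma[0], gamma[1] = 1.0, 0.5
u = rng.normal(size=n)
x = Z @ gamma + u + 0.5 * rng.normal(size=n)
y = 2.0 * x + 1.5 * u + 0.5 * rng.normal(size=n)    # true effect = 2.0

def soft(z, t):
    """Soft-thresholding operator for the L1 penalty."""
    return np.sign(z) * max(abs(z) - t, 0.0)

def lasso_cd(X, y, lam, iters=100):
    """Plain cyclic coordinate descent for L1-penalized least squares."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        for j in range(X.shape[1]):
            r = y - X @ b + X[:, j] * b[j]           # partial residual
            b[j] = soft(X[:, j] @ r / len(y), lam) / (X[:, j] @ X[:, j] / len(y))
    return b

# Stage 1: select instruments by lasso, then refit OLS on the support.
support = np.nonzero(lasso_cd(Z, x, lam=0.1))[0]
g_hat, *_ = np.linalg.lstsq(Z[:, support], x, rcond=None)
x_hat = Z[:, support] @ g_hat

# Stage 2: regress y on the fitted (exogenous) part of x.
beta_2sls = (x_hat @ y) / (x_hat @ x_hat)
beta_ols = (x @ y) / (x @ x)    # biased benchmark
```

The naive OLS slope is pulled away from 2.0 by the confounder, while the two-stage estimate is approximately consistent.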
Class Size Effects on Student Achievement: Heterogeneity across Abilities and Fields
ERIC Educational Resources Information Center
De Paola, Maria; Ponzo, Michela; Scoppa, Vincenzo
2013-01-01
In this paper, we analyze class size effects on college students exploiting data from a project offering special remedial courses in mathematics and language skills to freshmen enrolled at an Italian medium-sized public university. To estimate the effects of class size, we exploit the fact that students and teachers are virtually randomly assigned…
ERIC Educational Resources Information Center
Glass, Gene V.; Smith, Mary Lee
The first in a series of reports by the Far West Laboratory for Educational Research and Development, this report demonstrates the positive relationship between reduced class size and pupil achievement. The researchers collected about 80 studies that yielded over 700 comparisons of the achievement of smaller and larger classes. The results showed…
Application of identifying transmission spheres for spherical surface testing
NASA Astrophysics Data System (ADS)
Han, Christopher B.; Ye, Xin; Li, Xueyuan; Wang, Quanzhao; Tang, Shouhong; Han, Sen
2017-06-01
We developed a new application on Microsoft Foundation Classes (MFC) to identify correct transmission spheres (TS) for Spherical Surface Testing (SST). Spherical surfaces are important optical surfaces, and their wide application and high production rate necessitate an accurate and highly reliable measuring device. A Fizeau Interferometer is an appropriate tool for SST due to its subnanometer accuracy. It measures the contour of a spherical surface using a common path, which is insensitive to the surrounding environment. The Fizeau Interferometer transmits a wide laser beam, creating interference fringes from the re-converging light from the transmission sphere and the test surface. To make a successful measurement, the application calculates and determines the appropriate transmission sphere for the test surface. Three main inputs from the test surface are used to determine the optimal sizes and F-numbers of the transmission spheres: (1) the curvature (concave or convex), (2) the Radius of Curvature (ROC), and (3) the aperture size. The application first calculates the F-number (i.e., ROC divided by aperture) of the test surface, second determines the correct aperture size for a convex surface, third verifies that the ROC of the test surface is shorter than the ROC of the transmission sphere's reference surface, and lastly calculates the percentage of the test surface area that will be measured. When measuring large spherical surfaces, however, the number of interferometers and transmission spheres should be minimized to avoid requiring a large inventory for each test surface. Current measuring practices involve tedious and potentially inaccurate calculations. This smart application eliminates human calculation errors, optimizes the selection of transmission spheres (including the least number required) and interferometer sizes, and increases efficiency.
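The f-number step of the selection logic can be sketched as follows. The TS catalog values, the coverage rule, and the function names are hypothetical; the actual application also checks curvature and convex aperture limits, which are omitted here.

```python
# Hypothetical catalog of transmission-sphere f-numbers; real values depend on
# the interferometer vendor.
TS_CATALOG = [0.65, 1.1, 1.5, 3.3, 7.1, 10.9]

def f_number(roc_mm, aperture_mm):
    """F-number of a surface: |ROC| divided by aperture."""
    return abs(roc_mm) / aperture_mm

def pick_ts(test_roc_mm, test_aperture_mm, catalog=TS_CATALOG):
    """Pick the slowest TS that still fills the test aperture.

    Approximate geometric rule (an assumption for this sketch): a TS covers
    the full test aperture only when its f-number does not exceed the test
    surface's; otherwise the measured area fraction is roughly
    (f_test / f_ts) ** 2.
    """
    ft = f_number(test_roc_mm, test_aperture_mm)
    usable = [f for f in catalog if f <= ft]
    if usable:
        return max(usable), 1.0          # full-aperture coverage
    fs = min(catalog)                    # fastest available TS
    return fs, (ft / fs) ** 2            # partial coverage
```

For example, a 500 mm ROC, 100 mm aperture surface (f/5) would be matched to the hypothetical f/3.3 sphere with full coverage.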
Beyond eruptive scenarios: assessing tephra fallout hazard from Neapolitan volcanoes
Sandri, Laura; Costa, Antonio; Selva, Jacopo; Tonini, Roberto; Macedonio, Giovanni; Folch, Arnau; Sulpizio, Roberto
2016-01-01
Assessment of volcanic hazards is necessary for risk mitigation. Typically, hazard assessment is based on one or a few, subjectively chosen representative eruptive scenarios, which use a specific combination of eruptive sizes and intensities to represent a particular size class of eruption. While such eruptive scenarios use a range of representative members to capture a range of eruptive sizes and intensities in order to reflect a wider size class, a scenario approach neglects to account for the intrinsic variability of volcanic eruptions, and implicitly assumes that inter-class size variability (i.e. size difference between different eruptive size classes) dominates over intra-class size variability (i.e. size difference within an eruptive size class), the latter of which is treated as negligible. So far, no quantitative study has been undertaken to verify such an assumption. Here, we adopt a novel Probabilistic Volcanic Hazard Analysis (PVHA) strategy, which accounts for intrinsic eruptive variabilities, to quantify the tephra fallout hazard in the Campania area. We compare the results of the new probabilistic approach with the classical scenario approach. The results allow for determining whether a simplified scenario approach can be considered valid, and for quantifying the bias which arises when full variability is not accounted for. PMID:27067389
Small Class Size and Its Effects.
ERIC Educational Resources Information Center
Biddle, Bruce J.; Berliner, David C.
2002-01-01
Describes several prominent early grades small-class-size projects and their effects on student achievement: Indiana's Project Prime Time, Tennessee's Project STAR (Student/Teacher Achievement Ratio), Wisconsin's SAGE (Student Achievement Guarantee in Education) Program, and the California class-size-reduction program. Lists several conclusions,…
SRM-Assisted Trajectory for the GTX Reference Vehicle
NASA Technical Reports Server (NTRS)
Riehl, John; Trefny, Charles; Kosareo, Daniel
2002-01-01
A goal of the GTX effort has been to demonstrate the feasibility of a single stage-to-orbit (SSTO) vehicle that delivers a small payload to low earth orbit. The small payload class was chosen in order to minimize the risk and cost of development of this revolutionary system. A preliminary design study by the GTX team has resulted in the current configuration that offers considerable promise for meeting the stated goal. The size and gross lift-off weight resulting from scaling the current design to closure, however, may be considered impractical for the small payload. In lieu of evolving the project's reference vehicle to a large-payload class, this paper offers the alternative of using solid-rocket motors in order to close the vehicle at a practical scale. This approach offers a near-term, quasi-reusable system that easily evolves to reusable SSTO following subsequent development and optimization. This paper presents an overview of the impact of the addition of SRMs on the GTX reference vehicle's performance and trajectory. The overall methods of vehicle modeling and trajectory optimization will also be presented. A key element in the trajectory optimization is the use of the program OTIS 3.10, which provides rapid convergence and a great deal of flexibility to the user. This paper will also present the methods used to implement GTX requirements into OTIS modeling.
God, Jason M; Zhao, Dan; Cameron, Christine A; Amria, Shereen; Bethard, Jennifer R; Haque, Azizul
2014-01-01
While Burkitt lymphoma (BL) has a well-known defect in HLA class I-mediated antigen presentation, the exact role of BL-associated HLA class II in generating a poor CD4+ T-cell response remains unresolved. Here, we found that BL cells are deficient in their ability to optimally stimulate CD4+ T cells via the HLA class II pathway. This defect in CD4+ T-cell recognition was not associated with low levels of co-stimulatory molecules on BL cells, as addition of external co-stimulation failed to elicit CD4+ T-cell activation by BL. Further, the defect was not caused by faulty antigen/class II interaction, because antigenic peptides bound with measurable affinity to BL-associated class II molecules. Interestingly, functional class II–peptide complexes were formed at acidic pH 5·5, which restored immune recognition. Acidic buffer (pH 5·5) eluate from BL cells contained molecules that impaired class II-mediated antigen presentation and CD4+ T-cell recognition. Biochemical analysis showed that these molecules were greater than 30 000 molecular weight in size, and proteinaceous in nature. In addition, BL was found to have decreased expression of a 47 000 molecular weight enolase-like molecule that enhances class II-mediated antigen presentation in B cells, macrophages and dendritic cells, but not in BL cells. These findings demonstrate that BL likely has multiple defects in HLA class II-mediated antigen presentation and immune recognition, which may be exploited for future immunotherapies. PMID:24628049
NASA Astrophysics Data System (ADS)
Lovell, T. Alan; Schmidt, D. K.
1994-03-01
The class of hypersonic vehicle configurations with single stage-to-orbit (SSTO) capability reflects highly integrated airframe and propulsion systems. These designs are also known to exhibit a large degree of interaction between the airframe and engine dynamics. Consequently, even simplified hypersonic models are characterized by tightly coupled nonlinear equations of motion. In addition, hypersonic SSTO vehicles present a major system design challenge; the vehicle's overall mission performance is a function of its subsystem efficiencies including structural, aerodynamic, propulsive, and operational. Further, all subsystem efficiencies are interrelated, hence, independent optimization of the subsystems is not likely to lead to an optimum design. Thus, it is desired to know the effect of various subsystem efficiencies on overall mission performance. For the purposes of this analysis, mission performance will be measured in terms of the payload weight inserted into orbit. In this report, a trajectory optimization problem is formulated for a generic hypersonic lifting body for a specified orbit-injection mission. A solution method is outlined, and results are detailed for the generic vehicle, referred to as the baseline model. After evaluating the performance of the baseline model, a sensitivity study is presented to determine the effect of various subsystem efficiencies on mission performance. This consists of performing a parametric analysis of the basic design parameters, generating a matrix of configurations, and determining the mission performance of each configuration. Also, the performance loss due to constraining the total head load experienced by the vehicle is evaluated. The key results from this analysis include the formulation of the sizing problem for this vehicle class using trajectory optimization, characteristics of the optimal trajectories, and the subsystem design sensitivities.
ERIC Educational Resources Information Center
Ellis, Thomas I.
1985-01-01
After a brief introduction identifying current issues and trends in research on class size, this brochure reviews five recent studies bearing on the relationship of class size to educational effectiveness. Part 1 is a review of two interrelated and highly controversial "meta-analyses" or statistical integrations of research findings on…
Class Size Reduction in California: Summary of the 1998-99 Evaluation Findings.
ERIC Educational Resources Information Center
Stecher, Brian M.; Bohrnstedt, George W.
This report discusses the results of the third year--1998-99--of California's Class Size Reduction (CSR) program. Assessments of the program show that CSR was almost fully implemented by 1998-99, with over 92 percent of students in K-3 in classes of 20 or fewer students. Those K-3 classes that had not been reduced in size were concentrated in…
ERIC Educational Resources Information Center
Harfitt, Gary James
2012-01-01
Class size research suggests that teachers do not vary their teaching strategies when moving from large to smaller classes. This study draws on interviews and classroom observations of three experienced English language teachers working with large and reduced-size classes in Hong Kong secondary schools. Findings from the study point to subtle…
Focus on California's Class-Size Reduction: Smaller Classes Aim To Launch Early Literacy.
ERIC Educational Resources Information Center
McRobbie, Joan
Smaller class sizes in California were viewed as a way to improve K-3 education, especially in the area of literacy. The urgency to act prompted state leaders to adopt class-size reduction (CSR) without knowing for sure that it would work and without establishing a formal procedure for evaluating the program. This report looks at past research on…
A simple approach to optimal control of invasive species.
Hastings, Alan; Hall, Richard J; Taylor, Caz M
2006-12-01
The problem of invasive species and their control is one of the most pressing applied issues in ecology today. We developed simple approaches based on linear programming for determining the optimal removal strategies of different stage or age classes for control of invasive species that are still in a density-independent phase of growth. We illustrate the application of this method to the specific example of invasive Spartina alterniflora in Willapa Bay, WA. For all such systems, linear programming shows in general that the optimal strategy in any time step is to prioritize removal of a single age or stage class. The optimal strategy adjusts which class is the focus of control through time and can be much more cost effective than prioritizing removal of the same stage class each year.
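A one-time-step version of such a linear program can be sketched as follows. The projection matrix, removal costs, and budget below are invented for illustration, and the paper's formulation covers multiple time steps; scipy's `linprog` solver is assumed available.

```python
import numpy as np
from scipy.optimize import linprog

# Toy 3-stage projection matrix (seeds, juveniles, adults).
A = np.array([[0.0, 0.0, 2.0],
              [0.5, 0.0, 0.0],
              [0.0, 0.8, 0.9]])
n = np.array([100.0, 50.0, 30.0])      # current abundance per stage
cost = np.array([0.2, 1.0, 3.0])       # cost of removing one individual
budget = 15.0

# Removing h_i individuals of stage i reduces next year's total population by
# (column sum of A)_i per individual removed, so minimizing 1'A(n - h) means
# maximizing impact'h subject to cost'h <= budget and 0 <= h <= n.
impact = A.sum(axis=0)
res = linprog(-impact, A_ub=[cost], b_ub=[budget],
              bounds=list(zip(np.zeros(3), n)))
h_opt = res.x
```

Consistent with the paper's general result, the LP solution concentrates the entire budget on the single stage with the best impact-per-cost ratio (here, seed removal).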
The Class Size Policy Debate. Working Paper No. 121.
ERIC Educational Resources Information Center
Krueger, Alan B.; Hanushek, Eric A.
These papers examine research on the impact of class size on student achievement. After an "Introduction," (Richard Rothstein), Part 1, "Understanding the Magnitude and Effect of Class Size on Student Achievement" (Alan B. Krueger), presents a reanalysis of Hanushek's 1997 literature review, criticizing Hanushek's vote-counting…
NASA Astrophysics Data System (ADS)
Hegazy, Ahmad K.; Kabiel, Hanan F.
2007-05-01
Anastatica hierochuntica L. (Brassicaceae) is a desert monocarpic annual species characterized by a topochory/ombrohydrochory type of seed dispersal. The hygrochastic nature of the dry skeletons (dead individuals) allows seed dispersal to be controlled by rain events. The amount of dispersed seeds is proportional to the intensity of rainfall. When light showers occur, seeds are released and remain in the site. Seeds dispersed in the vicinity of the mother or source plant (the primary type of seed dispersal) resulted in a clumped pattern and complicated interrelationships among size-classes of the population. Following heavy rainfall, most seeds are released and transported into small patches and shallow depressions which collect runoff water. The dead A. hierochuntica skeletons demonstrate site-dependent size-class structure, spatial patterns and spatial interrelationships in different microhabitats. Four microhabitat types were sampled: runnels, patches, and simple and compound depressions in two sites (gravel and sand). Ripley's K-function was used to analyze the spatial pattern in populations of A. hierochuntica skeletons in the study microhabitats. Clumped patterns were observed in nearly all of the study microhabitats. Populations of A. hierochuntica in the sand site were more productive than in the gravel site and usually had more individuals in the larger size-classes. In the compound-depression microhabitat, the degree of clumping decreased from the core zone to the intermediate zone, then shifted into an overdispersed pattern in the outer zone. At the within size-class level, the clumped pattern dominated in small size-classes but shifted into random and overdispersed patterns in the larger size-classes. Aggregation between small and large size-classes was not well defined, but large individuals were found closer to the smaller individuals than to those of their own class.
In relation to the phytomass and the size-class structure, the outer zone of the simple depression and the outer and intermediate zones of the compound depression microhabitats were the most productive sites.
Exact solution of large asymmetric traveling salesman problems.
Miller, D L; Pekny, J F
1991-02-15
The traveling salesman problem is one of a class of difficult problems in combinatorial optimization that is representative of a large number of important scientific and engineering problems. A survey is given of recent applications and methods for solving large problems. In addition, an algorithm for the exact solution of the asymmetric traveling salesman problem is presented along with computational results for several classes of problems. The results show that the algorithm performs remarkably well for some classes of problems, determining an optimal solution even for problems with large numbers of cities, yet for other classes, even small problems thwart determination of a provably optimal solution.
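For small instances, the exact optimum of the asymmetric problem can be found with the classical Held-Karp dynamic program (a textbook method shown here for illustration, not the branch-and-bound algorithm of this paper); the 4-city distance matrix in the test is invented.

```python
from itertools import combinations

def held_karp(dist):
    """Exact asymmetric-TSP tour cost via Held-Karp, O(n^2 * 2^n)."""
    n = len(dist)
    # dp[(mask, j)]: cheapest path starting at city 0, visiting exactly the
    # cities in `mask` (a bitmask over cities 1..n-1), and ending at city j.
    dp = {(1 << j, j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            mask = 0
            for j in subset:
                mask |= 1 << j
            for j in subset:
                prev = mask ^ (1 << j)
                dp[(mask, j)] = min(dp[(prev, k)] + dist[k][j]
                                    for k in subset if k != j)
    full = (1 << n) - 2          # all cities except city 0
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))
```

The exponential state space is what confines this approach to small instances and motivates the branch-and-bound methods surveyed in the paper.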
Fast optimization of binary clusters using a novel dynamic lattice searching method.
Wu, Xia; Cheng, Wen
2014-09-28
Global optimization of binary clusters has been a difficult task despite much effort and many efficient methods. To address the two types of elements in binary clusters (i.e., the homotop problem), two classes of virtual dynamic lattices are constructed and a modified dynamic lattice searching (DLS) method, i.e., the binary DLS (BDLS) method, is developed. However, it was found that BDLS can only be utilized for the optimization of binary clusters of small sizes, because the homotop problem is hard to solve without an atomic exchange operation. Therefore, the iterated local search (ILS) method is adopted to solve the homotop problem, and an efficient method based on BDLS and ILS, named BDLS-ILS, is presented for global optimization of binary clusters. In order to assess the efficiency of the proposed method, binary Lennard-Jones clusters with up to 100 atoms are investigated. Results show that the method is efficient. Furthermore, the BDLS-ILS method is also adopted to study the geometrical structures of (AuPd)79 clusters with DFT-fit parameters of the Gupta potential.
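The role of the atomic exchange (swap) move and the ILS wrapper can be illustrated on a toy homotop problem: a ring lattice with invented pair energies, far simpler than the 3-D Lennard-Jones and Gupta-potential clusters treated in the paper.

```python
import random

random.seed(0)

# Toy "homotop" problem: arrange 3 A and 3 B atoms on a 6-site ring so that
# the sum of nearest-neighbour bond energies is minimal. Like bonds are
# favoured, so the optimum is two contiguous blocks (energy -5.0).
E = {("A", "A"): -1.0, ("B", "B"): -1.0, ("A", "B"): -0.5, ("B", "A"): -0.5}

def energy(conf):
    return sum(E[conf[i], conf[(i + 1) % len(conf)]] for i in range(len(conf)))

def swap_descent(conf):
    """Greedy local search whose only move is the atomic exchange (swap)."""
    improved = True
    while improved:
        improved = False
        for i in range(len(conf)):
            for j in range(i + 1, len(conf)):
                if conf[i] != conf[j]:
                    trial = conf[:]
                    trial[i], trial[j] = trial[j], trial[i]
                    if energy(trial) < energy(conf):
                        conf, improved = trial, True
    return conf

def iterated_local_search(conf, rounds=30):
    """ILS: perturb the incumbent with a random swap, then re-descend."""
    best = swap_descent(conf)
    for _ in range(rounds):
        trial = best[:]
        i, j = random.sample(range(len(trial)), 2)
        trial[i], trial[j] = trial[j], trial[i]
        trial = swap_descent(trial)
        if energy(trial) < energy(best):
            best = trial
    return best

best = iterated_local_search(["A", "B"] * 3)   # alternating start, energy -3.0
```

Without the swap move the alternating start could never change its element arrangement at all, which is the essence of the homotop difficulty the BDLS-ILS method addresses.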
Local Feature Selection for Data Classification.
Armanfard, Narges; Reilly, James P; Komeili, Majid
2016-06-01
Typical feature selection methods choose an optimal global feature subset that is applied over all regions of the sample space. In contrast, in this paper we propose a novel localized feature selection (LFS) approach whereby each region of the sample space is associated with its own distinct optimized feature set, which may vary both in membership and size across the sample space. This allows the feature set to optimally adapt to local variations in the sample space. An associated method for measuring the similarities of a query datum to each of the respective classes is also proposed. The proposed method makes no assumptions about the underlying structure of the samples; hence the method is insensitive to the distribution of the data over the sample space. The method is efficiently formulated as a linear programming optimization problem. Furthermore, we demonstrate the method is robust against the over-fitting problem. Experimental results on eleven synthetic and real-world data sets demonstrate the viability of the formulation and the effectiveness of the proposed algorithm. In addition we show several examples where localized feature selection produces better results than a global feature selection method.
ERIC Educational Resources Information Center
Mishel, Lawrence, Ed.; Rothstein, Richard, Ed.
This collection of papers debates the merits of smaller class sizes and research methods used to evaluate the efficacy of this education reform measure. Four chapters focus on (1) "Understanding the Magnitude and Effect of Class Size on Student Achievement" (Alan B. Krueger), which discusses expenditures per student and economic criterion; (2)…
Researcher Perspectives on Class Size Reduction
ERIC Educational Resources Information Center
Graue, Elizabeth; Rauscher, Erica
2009-01-01
This article applies to class size research Grant and Graue's (1999) position that reviews of research represent conversations in the academic community. By extending our understanding of the class size reduction conversation beyond published literature to the perspectives of researchers who have studied the topic, we create a review that includes…
76 FR 59116 - Procurement List; Additions
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-23
... NSN: AF110--Shirt, Class A/Primary Duty, USAF, Men's, Long Sleeve, Dark Navy Blue, Numerous Sizes. NSN: AF111--Shirt, Class A/Primary Duty, USAF, Women's, Long Sleeve, Dark Navy Blue, Numerous Sizes. NSN: AF120--Shirt, Class A/Primary Duty, USAF, Men's, Short Sleeve, Dark Navy Blue, Numerous Sizes. NSN...
Compilation of Class Size Findings: Grade Level, School, and District.
ERIC Educational Resources Information Center
Miller-Whitehead, Marie
This study provides an overview of class size research, examples of various class size and pupil-teacher-ratio (PTR) configurations commonly used by practitioners, and the most recent findings of scientifically controlled experimental Tennessee STAR studies. The learning environment is hierarchical in nature, with student-level data influenced by…
ERIC Educational Resources Information Center
Roza, Marguerite; Ouijdani, Monica
2012-01-01
Two seemingly different threads are in play on the issue of class size. The first is manifested in media reports that tell readers that class sizes are rising to concerning levels. The second thread appears in the work of some researchers and education leaders and suggests that repurposing class-size reduction funds to pay for other reforms may…
Class Size, Academic Achievement and Public Policy.
ERIC Educational Resources Information Center
Ziegler, Suzanne
1997-01-01
This report addresses some of the concerns surrounding smaller classes, examines whether reduced class sizes result in higher achievement levels, and concludes that they do in fact increase student achievement, so long as classes do not exceed 17 students. But many critics question whether the high cost of reducing classes to 17 or fewer…
Hu, Ning; Ma, Zhi-min; Lan, Jia-cheng; Wu, Yu-chun; Chen, Gao-qi; Fu, Wa-li; Wen, Zhi-lin; Wang, Wen-jing
2015-09-01
In order to illuminate the impact on soil nitrogen accumulation and supply in a karst rocky desertification area, the distribution characteristics of the soil nitrogen pool for each class of soil aggregates and the relationship between aggregate nitrogen pools and soil nitrogen mineralization were analyzed in this study. The results showed that the contents of total nitrogen, light fraction nitrogen, available nitrogen and mineral nitrogen in soil aggregates tended to increase with descending aggregate size, with the highest contents occurring in the <0.25 mm class. The content of nitrogen fractions for all aggregate classes followed the order of abandoned land < grass land < brush land < brush-arbor land < arbor land across the different sample plots. Artificial forest lands had a greater effect on the improvement of soil nitrogen than honeysuckle land. The study also showed that nitrogen storage differed among aggregate-size classes, with the highest amounts in the 5-10 mm and 2-5 mm classes, meaning that soil nutrients were stored mainly in large aggregates, which are therefore significant for nutrient storage. Among the aggregate-size classes, the 0.25-1 mm class contributed most to the soil net nitrogen mineralization quantity, followed by the >5 mm and 2-5 mm classes, with the remaining classes contributing least. With positive vegetation succession, the weight percentage of the >5 mm aggregate-size class increased and the nitrogen storage of macro-aggregates also increased. Accordingly, the capacity of the soil to supply mineral nitrogen and store organic nitrogen was enhanced.
Point Cloud Oriented Shoulder Line Extraction in Loess Hilly Area
NASA Astrophysics Data System (ADS)
Min, Li; Xin, Yang; Liyang, Xiong
2016-06-01
Shoulder lines are significant features in the hilly areas of the Loess Plateau in China, dividing the surface into positive and negative terrain (P-N terrains). Because point cloud vegetation removal methods differ between P-N terrains, there is an imperative need for shoulder line extraction. In this paper, we propose an automatic shoulder line extraction method based on point clouds. The workflow is as follows: (i) ground points are selected using a grid filter in order to remove most noisy points. (ii) Based on a DEM interpolated from those ground points, slope is mapped and classified into two classes (P-N terrains) using the Natural Breaks classification method. (iii) The common boundary between the two slope classes is extracted as the shoulder line candidate. (iv) The filter grid size is adjusted and steps i-iii repeated until the shoulder line candidate matches its real location. (v) The shoulder line of the whole area is generated. The test area is located in Madigou, Jingbian County, Shaanxi Province, China. A total of 600 million points were acquired in the 0.23 km2 test area using a Riegl VZ400 3D laser scanner in August 2014. Due to limited computing performance, the test area was divided into 60 blocks, and 13 of them around the shoulder line were selected for filter grid size optimization. The experimental results show that the optimal filter grid size varies across sample areas, and that a power function relation exists between filter grid size and point density. The optimal grid size was determined from this relation, and the shoulder lines of the 60 blocks were then extracted. Compared with manual interpretation results, the accuracy of the whole result reaches 85%. This method can be applied to shoulder line extraction in hilly areas, which is crucial for point cloud denoising and high-accuracy DEM generation.
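The reported power-function relation between optimal filter grid size and point density can be fitted from per-block calibration pairs by linear regression in log-log space. The numbers below are invented for illustration; the paper's 13 calibration blocks would supply the real pairs.

```python
import numpy as np

# Hypothetical (point density [pts/m^2], optimal grid size [m]) calibration
# pairs; fitting g = a * d**b is linear in log-log space.
density = np.array([50.0, 120.0, 400.0, 900.0, 2600.0])
grid = np.array([2.1, 1.4, 0.8, 0.55, 0.33])

b, log_a = np.polyfit(np.log(density), np.log(grid), 1)   # slope b, intercept ln(a)
a = np.exp(log_a)

def optimal_grid(d):
    """Predicted optimal filter grid size for point density d."""
    return a * d ** b
```

Denser blocks get smaller grids (negative exponent), matching the intuition that a finer filter is affordable where more ground returns are available.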
Grover's unstructured search by using a transverse field
NASA Astrophysics Data System (ADS)
Jiang, Zhang; Rieffel, Eleanor; Wang, Zhihui
2017-04-01
We design a circuit-based quantum algorithm to search for a needle in a haystack, giving the same quadratic speedup achieved by Grover's original algorithm. In our circuit-based algorithm, the problem Hamiltonian (oracle) and a transverse field (instead of Grover's diffusion operator) are applied to the system alternately. We construct a periodic time sequence such that the resultant unitary drives a closed transition between two states that have high degrees of overlap with the initial state (the even superposition of all states) and the target state, respectively. Let N = 2ⁿ be the size of the search space. The transition rate in our algorithm is of order Θ(1/√N), and the overlaps are of order Θ(1), yielding a nearly optimal query complexity of T = (π/(2√2))√N. Our algorithm is inspired by a class of algorithms proposed by Farhi et al., namely the Quantum Approximate Optimization Algorithm (QAOA); our method offers a route to optimizing the parameters in QAOA by restricting them to be periodic in time.
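The quadratic speedup being matched can be checked numerically with a small state-vector simulation of Grover's original diffusion-based search (shown for reference; this is the standard Grover operator, not the transverse-field algorithm of the paper):

```python
import numpy as np

def grover_success_prob(n_qubits, target):
    """Simulate Grover's original algorithm on n_qubits and return the
    probability of measuring the marked item after the optimal number
    of iterations, roughly (pi/4) * sqrt(N)."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))        # even superposition of all states
    iters = int(np.floor(np.pi / 4 * np.sqrt(N)))
    for _ in range(iters):
        state[target] *= -1                   # oracle: flip the marked amplitude
        state = 2 * state.mean() - state      # diffusion: inversion about the mean
    return abs(state[target]) ** 2
```

For n = 10 qubits the ⌊(π/4)√N⌋ = 25 iterations already drive the success probability above 0.99, illustrating the Θ(√N) query complexity shared by both algorithms.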
ERIC Educational Resources Information Center
Bowne, Jocelyn Bonnes; Magnuson, Katherine A.; Schindler, Holly S.; Duncan, Greg J.; Yoshikawa, Hirokazu
2017-01-01
This study uses data from a comprehensive database of U.S. early childhood education program evaluations published between 1960 and 2007 to evaluate the relationship between class size, child-teacher ratio, and program effect sizes for cognitive, achievement, and socioemotional outcomes. Both class size and child-teacher ratio showed nonlinear…
Singular optimal control and the identically non-regular problem in the calculus of variations
NASA Technical Reports Server (NTRS)
Menon, P. K. A.; Kelley, H. J.; Cliff, E. M.
1985-01-01
A small but interesting class of optimal control problems featuring a scalar control appearing linearly is equivalent to the class of identically nonregular problems in the Calculus of Variations. It is shown that a condition due to Mancill (1950) is equivalent to the generalized Legendre-Clebsch condition for this narrow class of problems.
Class Size: A Battle between Accountability and Quality Instruction
ERIC Educational Resources Information Center
Januszka, Cynthia; Dixon-Krauss, Lisbeth
2008-01-01
A substantial amount of controversy surrounds the issue of class size in public schools. Parents and teachers are on one side, touting the benefits of smaller class sizes (e.g., increased academic achievement, greater student-teacher interaction, utilization of more innovative teaching strategies, and a decrease in discipline problems). On the…
Another Look at the Glass and Smith Study on Class Size
ERIC Educational Resources Information Center
Phelps, James L.
2011-01-01
One of the most influential studies affecting educational policy is Glass and Smith's 1978 study, "Meta-Analysis of Research on the Relationship of Class-Size and Achievement." Since its publication, educational policymakers have referenced it frequently as the justification for reducing class size. While teachers and the public had long believed…
Class Size and Academic Achievement in Introductory Political Science Courses
ERIC Educational Resources Information Center
Towner, Terri L.
2016-01-01
Research on the influence of class size on student academic achievement is important for university instructors, administrators, and students. The article examines the influence of class size--a small section versus a large section--in introductory political science courses on student grades in two comparable semesters. It is expected that…
Class Size Revisited: Glass and Smith in Perspective.
ERIC Educational Resources Information Center
Hess, Fritz
Gene V. Glass and Mary Lee Smith claim in their report, "Meta-Analysis of Research on the Relationship of Class-Size and Achievement" (ED 168 129), that their integration of data from 80 previous studies through complex regression analysis techniques revealed a "clear and strong relationship" between decreases in class size and increases in…
Class Size Reduction and Academic Achievement of Low-Socioeconomic Students
ERIC Educational Resources Information Center
Rollins, Sarah E.
2013-01-01
Concern about the academic and social well-being of public education in the United States has been at the forefront of education reform. Increased class sizes, amended curriculum standards, and accountability standards have driven efforts to reduce class sizes to meet the demands placed upon educators. This study investigated the…
The Effects of Videoconferencing, Class Size, and Learner Characteristics on Training Outcomes
ERIC Educational Resources Information Center
Brown, Kenneth G.; Rietz, Thomas A.; Sugrue, Brenda
2005-01-01
We examined direct and interaction effects of learners' characteristics (cognitive ability, prior knowledge, prior experience, and motivation to learn) and classroom characteristics (videoconferencing and class size) on learning from a 16-week course. A 2x2 quasi-experimental design varied the class size between large (approximately 60 students)…
How Class Size Makes a Difference. Research & Development.
ERIC Educational Resources Information Center
Egelson, Paula; Harman, Patrick; Hood, Art; Achilles, C. M.
Landmark studies in the late 1970s and 1980s, including Tennessee's Project STAR (Student Teacher Achievement Ratio), raised the nation's awareness that reduced class size does have a positive impact on students' academic achievement. This report provides a sketch of class-size reduction's history in a prefatory overview. Chapter 1 describes…
Effects of Class Size and Attendance Policy on University Classroom Interaction in Taiwan
ERIC Educational Resources Information Center
Bai, Yin; Chang, Te-Sheng
2016-01-01
Classroom interaction experience is one of the main parts of students' learning lives. However, surprisingly little research has investigated students' perceptions of classroom interaction with different attendance policies across different class sizes in the higher education system. To elucidate the effects of class size and attendance policy on…
Multilevel algorithms for nonlinear optimization
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia; Dennis, J. E., Jr.
1994-01-01
Multidisciplinary design optimization (MDO) gives rise to nonlinear optimization problems characterized by a large number of constraints that naturally occur in blocks. We propose a class of multilevel optimization methods motivated by the structure and number of constraints and by the expense of the derivative computations for MDO. The algorithms are an extension to the nonlinear programming problem of the successful class of local Brown-Brent algorithms for nonlinear equations. Our extensions allow the user to partition constraints into arbitrary blocks to fit the application, and they separately process each block and the objective function, restricted to certain subspaces. The methods use trust regions as a globalization strategy, and they have been shown to be globally convergent under reasonable assumptions. The multilevel algorithms can be applied to all classes of MDO formulations. Multilevel algorithms for solving nonlinear systems of equations are a special case of the multilevel optimization methods. In this case, they can be viewed as a trust-region globalization of the Brown-Brent class.
Biomass and productivity of three phytoplankton size classes in San Francisco Bay.
Cole, B.E.; Cloern, J.E.; Alpine, A.E.
1986-01-01
The 5-22 µm size class accounted for 40-50% of annual production in each embayment, but production by phytoplankton >22 µm ranged from 26% of total phytoplankton production in the S reach to 54% in the landward embayment of the N reach. A productivity index is derived that predicts daily productivity for each size class as a function of ambient irradiance and integrated chlorophyll a in the photic zone. For the whole phytoplankton community and for each size class, this index was constant at approximately 0.76 g C m-2 (g chlorophyll a Einstein)-1. The annual means of maximum carbon assimilation numbers were usually similar for the three size classes. Spatial and temporal variations in size-fractionated productivity are primarily due to differences in biomass rather than size-dependent carbon assimilation rates. -from Authors
NASA Astrophysics Data System (ADS)
Selva, Jacopo; Sandri, Laura; Costa, Antonio; Tonini, Roberto; Folch, Arnau; Macedonio, Giovanni
2014-05-01
The intrinsic uncertainty and variability associated with the size of the next eruption strongly affect short- to long-term tephra hazard assessment. Often, emergency plans are established accounting for the effects of one or a few representative scenarios (meant as specific combinations of eruptive size and vent position) selected with subjective criteria. Probabilistic hazard assessments (PHA), on the other hand, consistently explore the natural variability of such scenarios. PHA for tephra dispersal needs the definition of eruptive scenarios (usually by grouping possible eruption sizes and vent positions into classes) with associated probabilities, a meteorological dataset covering a representative time period, and a tephra dispersal model. PHA results are obtained by combining simulations for different volcanological and meteorological conditions, each weighted by its specific probability of occurrence. However, volcanological parameters such as erupted mass, eruption column height and duration, bulk granulometry, and the fraction of aggregates typically span a wide range of values. Because of this variability, single representative scenarios or size classes cannot be adequately defined using single values for the volcanological inputs. Here we propose a method that accounts for this within-size-class variability in the framework of Event Trees. The variability of each parameter is modeled with a specific probability density function, and meteorological and volcanological inputs are chosen using a stratified sampling method. This procedure avoids the bias introduced by selecting single representative scenarios, which neglects most of the intrinsic eruptive variability. When considering within-size-class variability, attention must be paid to appropriately weighting events that fall within the same size class.
While a uniform weight for all the events belonging to a size class is the most straightforward choice, it implies a strong dependence on the thresholds dividing classes: under this choice, the largest event of a size class receives a much larger weight than the smallest event of the subsequent size class. To overcome this problem, we propose an innovative solution that smoothly links the weight variability within each size class to the variability among the size classes through a common power law, while simultaneously respecting the probabilities of the different size classes conditional on the occurrence of an eruption. Embedding this procedure into the Bayesian Event Tree scheme enables tephra fall PHA, quantified through hazard curves and maps that provide readable results applicable in planning risk mitigation actions, together with the quantification of its epistemic uncertainties. As examples, we analyze long-term tephra fall PHA at Vesuvius and Campi Flegrei. We integrate two tephra dispersal models (the analytical HAZMAP and the numerical FALL3D) into BET_VH. The ECMWF reanalysis dataset is used for exploring different meteorological conditions. The results clearly show that PHA accounting for the whole natural variability differs significantly from that based on representative scenarios, as is common practice in volcanic hazard assessment.
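A minimal sketch of the kind of within-class power-law weighting described above; the exponent b, the class edges, and the renormalisation rule are illustrative assumptions, not the exact BET_VH procedure:

```python
import numpy as np

def event_weights(masses, class_edges, class_probs, b=1.0):
    """Weight sampled eruptive events by a common power law in erupted
    mass, renormalised so that each size class keeps its assigned
    probability conditional on an eruption.

    masses:      erupted mass of each sampled event
    class_edges: increasing thresholds dividing the size classes
    class_probs: probability of each class conditional on an eruption
    b:           power-law exponent (assumed value, for illustration)
    """
    masses = np.asarray(masses, dtype=float)
    raw = masses ** (-b)                       # common power law across all classes
    cls = np.digitize(masses, class_edges)     # size class index of each event
    w = np.empty_like(raw)
    for k, p in enumerate(class_probs):
        sel = cls == k
        w[sel] = p * raw[sel] / raw[sel].sum() # renormalise within the class
    return w

w = event_weights([1.0, 2.0, 20.0, 30.0], class_edges=[10.0],
                  class_probs=[0.7, 0.3])
```

Because each class is renormalised to its assigned probability, the weights respect the size-class probabilities while decaying smoothly with event size inside each class, avoiding the jump at class thresholds that a uniform weighting produces.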
Influences of landscape heterogeneity on home-range sizes of brown bears
Mangipane, Lindsey S.; Belant, Jerrold L.; Hiller, Tim L.; Colvin, Michael E.; Gustine, David; Mangipane, Buck A.; Hilderbrand, Grant V.
2018-01-01
Animal space use is influenced by many factors and can affect individual survival and fitness. Under optimal foraging theory, individuals use landscapes to optimize access to high-quality resources while minimizing the energy used to acquire them. The spatial resource variability hypothesis states that as patchiness of resources increases, individuals use larger areas to obtain the resources necessary to meet energetic requirements. Additionally, under the temporal resource variability hypothesis, seasonal variation in available resources can reduce distances moved while providing a variety of food sources. Our objective was to determine if seasonal home ranges of brown bears (Ursus arctos) were influenced by temporal availability and spatial distribution of resources and whether individual reproductive status, sex, or size (i.e., body mass) mediated space use. To test our hypotheses, we radio-collared brown bears (n = 32 [9 male, 23 female]) in 2014–2016 and used 18 a priori-selected linear models to evaluate seasonal utilization distributions (UD) in relation to our hypotheses. Our top-ranked model by AICc supported the spatial resource variability hypothesis and included percentage of like adjacency (PLADJ) of all cover types (P < 0.01), reproductive class (P > 0.17 for males, solitary females, and females with dependent young), and body mass (kg; P = 0.66). Based on this model, for every percentage increase in PLADJ, UD area was predicted to increase 1.16 times for all sex and reproductive classes. Our results suggest that landscape heterogeneity influences brown bear space use; however, we found that bears used larger areas when landscape homogeneity increased, presumably to gain a diversity of food resources. Our results did not support the temporal resource variability hypothesis, suggesting that the spatial distribution of food was more important than seasonal availability in relation to brown bear home range size.
NASA Astrophysics Data System (ADS)
Tavakkoli-Moghaddam, Reza; Vazifeh-Noshafagh, Samira; Taleizadeh, Ata Allah; Hajipour, Vahid; Mahmoudi, Amin
2017-01-01
This article presents a new multi-objective model for a facility location problem with congestion and pricing policies. This model considers situations in which immobile service facilities are congested by a stochastic demand following M/M/m/k queues. The presented model belongs to the class of mixed-integer nonlinear programming models and NP-hard problems. To solve such a hard model, a new multi-objective optimization algorithm based on vibration theory, namely multi-objective vibration damping optimization (MOVDO), is developed. In order to tune the algorithm's parameters, the Taguchi approach using a response metric is implemented. The computational results are compared with those of the non-dominated ranking genetic algorithm and non-dominated sorting genetic algorithm. The outputs demonstrate the robustness of the proposed MOVDO on large-sized problems.
Alegana, Victor A; Wright, Jim; Bosco, Claudio; Okiro, Emelda A; Atkinson, Peter M; Snow, Robert W; Tatem, Andrew J; Noor, Abdisalan M
2017-11-21
One pillar of monitoring progress towards the Sustainable Development Goals is the investment in high quality data to strengthen the scientific basis for decision-making. At present, nationally-representative surveys are the main source of data for establishing a scientific evidence base, monitoring, and evaluation of health metrics. However, the optimal precision of various population-level health and development indicators in nationally-representative household surveys remains unquantified. Here, a retrospective analysis of the precision of prevalence estimates from these surveys was conducted. Using malaria indicators, data were assembled in nine sub-Saharan African countries with at least two nationally-representative surveys. A Bayesian statistical model was used to estimate between- and within-cluster variability for fever and malaria prevalence, and insecticide-treated bed net (ITN) use in children under the age of 5 years. The intra-class correlation coefficient was estimated along with the optimal sample size for each indicator with associated uncertainty. Results suggest that the estimated sample sizes for the current nationally-representative surveys increase with declining malaria prevalence. Comparison between the actual sample size and the modelled estimate showed a requirement to increase the sample size for parasite prevalence by up to 77.7% (95% Bayesian credible intervals 74.7-79.4) for the 2015 Kenya MIS (estimated sample size of children 0-4 years 7218 [7099-7288]), and 54.1% [50.1-56.5] for the 2014-2015 Rwanda DHS (12,220 [11,950-12,410]). This study highlights the importance of defining indicator-relevant sample sizes to achieve the required precision in the current national surveys. While expanding the current surveys would need additional investment, the study highlights the need for improved approaches to cost-effective sampling.
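The dependence of required sample size on prevalence and intra-class correlation can be illustrated with the classical design-effect approximation (a textbook formula, not the Bayesian model used in the study):

```python
def survey_sample_size(p, rel_precision, m, icc, z=1.96):
    """Approximate cluster-survey sample size for estimating a prevalence p.

    The simple-random-sample size n0 = z^2 * p * (1 - p) / d^2 is inflated
    by the design effect 1 + (m - 1) * icc for clusters of average size m.
    Here d, the absolute precision, is taken as rel_precision * p, so the
    required n grows sharply as prevalence declines.
    """
    d = rel_precision * p
    n0 = z ** 2 * p * (1 - p) / d ** 2
    deff = 1 + (m - 1) * icc
    return int(round(n0 * deff))

n_low = survey_sample_size(p=0.05, rel_precision=0.2, m=15, icc=0.1)
n_high = survey_sample_size(p=0.20, rel_precision=0.2, m=15, icc=0.1)
```

Holding relative precision fixed, a prevalence of 5% demands a far larger sample than one of 20%, mirroring the abstract's finding that required sample sizes grow as malaria prevalence declines.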
Starr, James C.; Torgersen, Christian E.
2015-01-01
We compared the assemblage structure, spatial distributions, and habitat associations of mountain whitefish (Prosopium williamsoni) morphotypes and size classes. We hypothesised that morphotypes would have different spatial distributions and would be associated with different habitat features based on feeding behaviour and diet. Spatially continuous sampling was conducted over a broad extent (29 km) in the Calawah River, WA (USA). Whitefish were enumerated via snorkelling in three size classes: small (10–29 cm), medium (30–49 cm), and large (≥50 cm). We identified morphotypes based on head and snout morphology: a pinocchio form that had an elongated snout and a normal form with a blunted snout. Large size classes of both morphotypes were distributed downstream of small and medium size classes, and normal whitefish were distributed downstream of pinocchio whitefish. Ordination of whitefish assemblages with nonmetric multidimensional scaling revealed that normal whitefish size classes were associated with higher gradient and depth, whereas pinocchio whitefish size classes were positively associated with pool area, distance upstream, and depth. Reach-scale generalised additive models indicated that normal whitefish relative density was associated with larger substrate size in downstream reaches (R2 = 0.64), and pinocchio whitefish were associated with greater stream depth in the reaches farther upstream (R2 = 0.87). These results suggest broad-scale spatial segregation (1–10 km), particularly between larger and more phenotypically extreme individuals. These results provide the first perspective on spatial distributions and habitat relationships of polymorphic mountain whitefish.
Texture-based segmentation of temperate-zone woodland in panchromatic IKONOS imagery
NASA Astrophysics Data System (ADS)
Gagnon, Langis; Bugnet, Pierre; Cavayas, Francois
2003-08-01
We performed a study to identify optimal texture parameters for woodland segmentation in a highly non-homogeneous urban area from a temperate-zone panchromatic IKONOS image. Texture images are produced with sum- and difference-histograms, which depend on two parameters: window size f and displacement step p. The four texture features yielding the best discrimination between classes are the mean, contrast, correlation and standard deviation. The f-p combinations 17-1, 17-2, 35-1 and 35-2 give the best performance, with an average classification rate of 90%.
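The sum- and difference-histogram (Unser) features named above can be sketched as follows; this minimal version computes the mean, contrast, and standard deviation features for a single displacement over the whole image, omitting the sliding window of size f:

```python
import numpy as np

def sum_diff_features(img, dx=1, dy=0):
    """Unser-style texture features from the normalised sum- and
    difference-histograms of pixel pairs separated by (dx, dy).
    Assumes a non-negative integer-valued image."""
    img = np.asarray(img, dtype=np.int64)
    h, w = img.shape
    a = img[0:h - dy, 0:w - dx]
    b = img[dy:h, dx:w]
    s = (a + b).ravel()                       # sum image values
    d = (a - b).ravel()                       # difference image values
    ps = np.bincount(s) / s.size              # normalised sum histogram
    pd = np.bincount(d - d.min()) / d.size    # normalised difference histogram
    i = np.arange(ps.size)
    j = np.arange(pd.size) + d.min()
    mean = 0.5 * (i * ps).sum()
    contrast = (j ** 2 * pd).sum()
    variance = 0.5 * (((i - 2 * mean) ** 2 * ps).sum() + contrast)
    return mean, contrast, np.sqrt(variance)

m, c, sd = sum_diff_features(np.full((4, 4), 3))
```

In the study's setting these statistics would be computed over each f-by-f window to build per-pixel texture images for classification.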
Class Size Reduction in Practice: Investigating the Influence of the Elementary School Principal
ERIC Educational Resources Information Center
Burch, Patricia; Theoharis, George; Rauscher, Erica
2010-01-01
Class size reduction (CSR) has emerged as a very popular, if not highly controversial, policy approach for reducing the achievement gap. This article reports on findings from an implementation study of class size reduction policy in Wisconsin entitled the Student Achievement Guarantee in Education (SAGE). Drawing on case studies of nine schools,…
What We Have Learned about Class Size Reduction in California. Capstone Report.
ERIC Educational Resources Information Center
Bohrnstedt, George W., Ed.; Stecher, Brian M., Ed.
This final report on the California Class Size Reduction (CSR) initiative summarizes findings from three earlier reports dating back to 1997. Chapter 1 recaps the history of California's CSR initiative and includes a discussion of what state leaders' expectations were when CSR was passed. The chapter also describes research on class-size reduction…
Teacher/Student Interactions in Public Elementary Schools When Class Size is a Factor.
ERIC Educational Resources Information Center
Krieger, Jean D.
This report describes a study designed to discover the nature of teacher-student interactions in regular-size classes with 25 or more students and small-size classes with fewer than 18 students. Eleven public-school primary classrooms were observed, and the interactions between the teachers and students were studied. Verbal and nonverbal…
Serendipitous Policy Implications from Class-Size-Initiated Inquiry: IAQ?
ERIC Educational Resources Information Center
Achilles, C. M.; Prout, Jean; Finn, J. D.; Bobbett, Gordon C.
The level of carbon dioxide in a classroom can have a significant negative effect on teaching and learning. Carbon dioxide (CO2) level is affected by class size and time of day. Six urban schools were studied to characterize the effects of these factors across different class sizes. Carbon monoxide, CO2, temperature, and relative humidity…
The Cost of Class Size Reduction: Advice for Policymakers. RAND Graduate School Dissertation.
ERIC Educational Resources Information Center
Reichardt, Robert E.
This dissertation provides information to state-level policymakers that will help them avoid two implementation problems seen in the past in California's class-size-reduction (CSR) reform. The first problem was that flat, per student reimbursement did not adequately cover costs in districts with larger pre-CSR class-sizes or smaller schools. The…
Size Matters. The Relevance and Hicksian Surplus of Preferred College Class Size
ERIC Educational Resources Information Center
Mandel, Philipp; Susmuth, Bernd
2011-01-01
The contribution of this paper is twofold. First, we examine the impact of class size on student evaluations of instructor performance using a sample of approximately 1400 economics classes held at the University of Munich from Fall 1998 to Summer 2007. We offer confirmatory evidence for the recent finding of a large, highly significant, and…
Class Size Effects on Reading Achievement Using PIRLS Data: Evidence from Greece
ERIC Educational Resources Information Center
Konstantopoulos, Spyros; Traynor, Anne
2014-01-01
Background/Context: The effects of class size on student achievement have gained considerable attention in education research and policy, especially over the last 30 years. Perhaps the best evidence about the effects of class size thus far has been produced from analyses of Project STAR data, a large-scale experiment where students and teachers…
The Effects of Class Size on Student Achievement in Intermediate Level Elementary Students
ERIC Educational Resources Information Center
McInerney, Melissa
2014-01-01
Class size and student achievement have been debated for decades. The vast amount of research on this topic is either conflicting or inconclusive. There are large and small scale studies that support both sides of this dilemma (Achilles, Nye, Boyd-Zaharias, Fulton, & Cain, 1994; Glass & Smith, 1979; Slavin, 1989). Class size reduction is a…
Class Size and Language Learning in Hong Kong: The Students' Perspective
ERIC Educational Resources Information Center
Harfitt, Gary James
2012-01-01
Background: There is currently ongoing debate in Hong Kong between the teachers' union and the Government on the reduction of large class size (typically more than 40 students) in secondary schools and whether smaller class sizes might facilitate improvements in teaching and learning. In fact, many Hong Kong secondary schools have already started…
ERIC Educational Resources Information Center
Allhusen, Virginia; Belsky, Jay; Booth-LaForce, Cathryn L.; Bradley, Robert; Brownwell, Celia A; Burchinal, Margaret; Campbell, Susan B.; Clarke-Stewart, K. Alison; Cox, Martha; Friedman, Sarah L.; Hirsh-Pasek, Kathryn; Houts, Renate M.; Huston, Aletha; Jaeger, Elizabeth; Johnson, Deborah J.; Kelly, Jean F.; Knoke, Bonnie; Marshall, Nancy; McCartney, Kathleen; Morrison, Frederick J.; O'Brien, Marion; Tresch Owen, Margaret; Payne, Chris; Phillips, Deborah; Pianta, Robert; Randolph, Suzanne M.; Robeson, Wendy W.; Spieker, Susan; Lowe Vandell, Deborah; Weinraub, Marsha
2004-01-01
This study evaluated the extent to which first-grade class size predicted child outcomes and observed classroom processes for 651 children (in separate classrooms). Analyses examined observed child-adult ratios and teacher-reported class sizes. Smaller classrooms showed higher quality instructional and emotional support, although children were…
Class Size and Student Evaluations in Sweden
ERIC Educational Resources Information Center
Westerlund, Joakim
2008-01-01
This paper examines the effect of class size on student evaluations of the quality of an introductory mathematics course at Lund University in Sweden. In contrast to many other studies, we find a large negative, and statistically significant, effect of class size on the quality of the course. This result appears to be quite robust, as almost all…
Bayesian Regression with Network Prior: Optimal Bayesian Filtering Perspective
Qian, Xiaoning; Dougherty, Edward R.
2017-01-01
The recently introduced intrinsically Bayesian robust filter (IBRF) provides fully optimal filtering relative to a prior distribution over an uncertainty class of joint random process models, whereas formerly the theory was limited to model-constrained Bayesian robust filters, for which optimization was limited to the filters that are optimal for models in the uncertainty class. This paper extends the IBRF theory to the situation where there are both a prior on the uncertainty class and sample data. The result is optimal Bayesian filtering (OBF), where optimality is relative to the posterior distribution derived from the prior and the data. The IBRF theories for effective characteristics and canonical expansions extend to the OBF setting. A salient focus of the present work is to demonstrate the advantages of Bayesian regression within the OBF setting over the classical Bayesian approach in the context of linear Gaussian models. PMID:28824268
Reeder, Jens; Giegerich, Robert
2004-01-01
Background The general problem of RNA secondary structure prediction under the widely used thermodynamic model is known to be NP-complete when the structures considered include arbitrary pseudoknots. For restricted classes of pseudoknots, several polynomial time algorithms have been designed, among which the O(n⁶)-time and O(n⁴)-space algorithm by Rivas and Eddy is currently the best available program. Results We introduce the class of canonical simple recursive pseudoknots and present an algorithm that requires O(n⁴) time and O(n²) space to predict the energetically optimal structure of an RNA sequence, possibly containing such pseudoknots. Evaluation against a large collection of known pseudoknotted structures shows the adequacy of the canonization approach and our algorithm. Conclusions RNA pseudoknots of medium size can now be predicted reliably as well as efficiently by the new algorithm. PMID:15294028
A Comparison of Techniques for Scheduling Fleets of Earth-Observing Satellites
NASA Technical Reports Server (NTRS)
Globus, Al; Crawford, James; Lohn, Jason; Pryor, Anna
2003-01-01
Earth observing satellite (EOS) scheduling is a complex real-world domain representative of a broad class of over-subscription scheduling problems. Over-subscription problems are those where requests for a facility exceed its capacity. These problems arise in a wide variety of NASA and terrestrial domains and are an important class of scheduling problems because such facilities often represent large capital investments. We have run experiments comparing multiple variants of the genetic algorithm, hill climbing, simulated annealing, squeaky wheel optimization and iterated sampling on two variants of a realistically-sized model of the EOS scheduling problem. These are implemented as permutation-based methods: methods that search in the space of priority orderings of observation requests and evaluate each permutation by using it to drive a greedy scheduler. Simulated annealing performs best, and random mutation operators outperform our squeaky wheel (more intelligent) operator. Furthermore, taking smaller steps towards the end of the search improves performance.
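A toy sketch of the permutation-based approach: simulated annealing searches the space of priority orderings, each scored by a greedy scheduler. Here a simple capacity model stands in for the EOS simulator; all names and parameters are illustrative assumptions:

```python
import math
import random

def greedy_schedule(order, requests, capacity):
    """Grant requests in priority order until capacity runs out; return
    the total value of granted requests. requests is a list of
    (duration, value) pairs; a stand-in for the real greedy scheduler."""
    used, value = 0, 0.0
    for i in order:
        dur, val = requests[i]
        if used + dur <= capacity:
            used += dur
            value += val
    return value

def anneal_permutation(requests, capacity, steps=2000, t0=1.0, seed=0):
    """Simulated annealing over priority orderings with swap moves."""
    rng = random.Random(seed)
    order = list(range(len(requests)))
    best = cur = greedy_schedule(order, requests, capacity)
    best_order = order[:]
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9           # linear cooling schedule
        i, j = rng.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]      # propose a swap
        cand = greedy_schedule(order, requests, capacity)
        if cand >= cur or rng.random() < math.exp((cand - cur) / t):
            cur = cand
            if cur > best:
                best, best_order = cur, order[:]
        else:
            order[i], order[j] = order[j], order[i]  # reject: undo the swap
    return best, best_order

best, order = anneal_permutation([(2, 3.0), (3, 4.0), (4, 5.0), (5, 6.0)],
                                 capacity=5)
```

Searching permutations rather than schedules keeps every candidate feasible by construction, which is one reason these methods scale to over-subscribed problems.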
Discriminant locality preserving projections based on L1-norm maximization.
Zhong, Fujin; Zhang, Jiashu; Li, Defang
2014-11-01
Conventional discriminant locality preserving projection (DLPP) is a dimensionality reduction technique based on manifold learning, which has demonstrated good performance in pattern recognition. However, because its objective function is based on the distance criterion using L2-norm, conventional DLPP is not robust to outliers which are present in many applications. This paper proposes an effective and robust DLPP version based on L1-norm maximization, which learns a set of local optimal projection vectors by maximizing the ratio of the L1-norm-based locality preserving between-class dispersion and the L1-norm-based locality preserving within-class dispersion. The proposed method is proven to be feasible and also robust to outliers while overcoming the small sample size problem. The experimental results on artificial datasets, Binary Alphadigits dataset, FERET face dataset and PolyU palmprint dataset have demonstrated the effectiveness of the proposed method.
Multiplex PCR for Rapid Detection of Genes Encoding Class A Carbapenemases
Hong, Sang Sook; Kim, Kyeongmi; Huh, Ji Young; Jung, Bochan; Kang, Myung Seo
2012-01-01
In recent years, there have been increasing reports of KPC-producing Klebsiella pneumoniae in Korea. The modified Hodge test can be used as a phenotypic screening test for class A carbapenamase (CAC)-producing clinical isolates; however, it does not distinguish between carbapenemase types. The confirmation of type of CAC is important to ensure optimal therapy and to prevent transmission. This study applied a novel multiplex PCR assay to detect and differentiate CAC genes in a single reaction. Four primer pairs were designed to amplify fragments encoding 4 CAC families (SME, IMI/NMC-A, KPC, and GES). The multiplex PCR detected all genes tested for 4 CAC families that could be differentiated by fragment size according to gene type. This multiplex PCR offers a simple and useful approach for detecting and distinguishing CAC genes in carbapenem-resistant strains that are metallo-β-lactamase nonproducers. PMID:22950072
Multiplex PCR for rapid detection of genes encoding class A carbapenemases.
Hong, Sang Sook; Kim, Kyeongmi; Huh, Ji Young; Jung, Bochan; Kang, Myung Seo; Hong, Seong Geun
2012-09-01
In recent years, there have been increasing reports of KPC-producing Klebsiella pneumoniae in Korea. The modified Hodge test can be used as a phenotypic screening test for class A carbapenamase (CAC)-producing clinical isolates; however, it does not distinguish between carbapenemase types. The confirmation of type of CAC is important to ensure optimal therapy and to prevent transmission. This study applied a novel multiplex PCR assay to detect and differentiate CAC genes in a single reaction. Four primer pairs were designed to amplify fragments encoding 4 CAC families (SME, IMI/NMC-A, KPC, and GES). The multiplex PCR detected all genes tested for 4 CAC families that could be differentiated by fragment size according to gene type. This multiplex PCR offers a simple and useful approach for detecting and distinguishing CAC genes in carbapenem-resistant strains that are metallo-β-lactamase nonproducers.
Bantis, Leonidas E; Nakas, Christos T; Reiser, Benjamin; Myall, Daniel; Dalrymple-Alford, John C
2017-06-01
The three-class approach is used for progressive disorders when clinicians and researchers want to diagnose or classify subjects as members of one of three ordered categories based on a continuous diagnostic marker. The decision thresholds or optimal cut-off points required for this classification are often chosen to maximize the generalized Youden index (Nakas et al., Stat Med 2013; 32: 995-1003). The effectiveness of these chosen cut-off points can be evaluated by estimating their corresponding true class fractions and their associated confidence regions. Recently, in the two-class case, parametric and non-parametric methods were investigated for the construction of confidence regions for the pair of Youden-index-based optimal sensitivity and specificity fractions that can take into account the correlation introduced between sensitivity and specificity when the optimal cut-off point is estimated from the data (Bantis et al., Biometrics 2014; 70: 212-223). A parametric approach based on the Box-Cox transformation to normality often works well, while for markers having more complex distributions a non-parametric procedure using logspline density estimation can be used instead. The true class fractions that correspond to the optimal cut-off points estimated by the generalized Youden index are correlated similarly to the two-class case. In this article, we generalize these methods to the three-class and the general k-class case, which involve the classification of subjects into three or more ordered categories, where ROC surface or ROC manifold methodology, respectively, is typically employed for the evaluation of the discriminatory capacity of a diagnostic marker. We obtain three- and multi-dimensional joint confidence regions for the optimal true class fractions. We illustrate this with an application to the Trail Making Test Part A, which has been used to characterize cognitive impairment in patients with Parkinson's disease.
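The generalized Youden index for three ordered classes can be illustrated with a toy computation. This is not the authors' estimator; the marker distributions are invented equal-variance normals, and the cut-offs are found by a coarse grid search.

```python
from statistics import NormalDist

# Illustrative sketch (not the cited paper's method): for three ordered
# classes with assumed normal marker distributions, the generalized Youden
# index J3(c1, c2) = P(X1 < c1) + P(c1 < X2 < c2) + P(X3 > c2) is maximized
# over ordered cut-off pairs by grid search. Parameters are invented.
d1, d2, d3 = NormalDist(0, 1), NormalDist(2, 1), NormalDist(4, 1)

def j3(c1, c2):
    return d1.cdf(c1) + (d2.cdf(c2) - d2.cdf(c1)) + (1 - d3.cdf(c2))

grid = [i / 20 for i in range(-40, 121)]  # cut-off candidates in [-2, 6]
c1_opt, c2_opt = max(((a, b) for a in grid for b in grid if a < b),
                     key=lambda p: j3(*p))
```

With equal variances the index is separable and each optimal cut-off is the midpoint of the adjacent class means (here 1.0 and 3.0), which the grid search recovers.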
He, Xin; Frey, Eric C
2006-08-01
Previously, we have developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (equal error utility assumption). This assumption reduced the dimensionality of the "general" three-class ROC analysis and provided a practical figure-of-merit to evaluate three-class task performance. However, it also limits the generality of the resulting model because the equal error utility assumption will not apply for all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC, or minimum-error), maximum-likelihood (ML), and Neyman-Pearson (N-P) criteria. We found that by making assumptions for both MEU and N-P criteria, all decision criteria lead to the previously proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and the resulting ROC surface contains the maximum-likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.
Risk-Assessment Score and Patient Optimization as Cost Predictors for Ventral Hernia Repair.
Saleh, Sherif; Plymale, Margaret A; Davenport, Daniel L; Roth, John Scott
2018-04-01
Ventral hernia repair (VHR) is associated with complications that significantly increase healthcare costs. This study explores the associations between hospital costs for VHR and surgical complication risk-assessment scores, need for cardiac or pulmonary evaluation, and smoking or obesity counseling. An IRB-approved retrospective study of patients having undergone open VHR over 3 years was performed. Ventral Hernia Risk Score (VHRS) for surgical site occurrence and surgical site infection, and the Ventral Hernia Working Group grade were calculated for each case. Also recorded were preoperative cardiology or pulmonary evaluations, smoking cessation and weight reduction counseling, and patient goal achievement. Hospital costs were obtained from the cost accounting system for the VHR hospitalization stratified by major clinical cost drivers. Univariate regression analyses were used to compare the predictive power of the risk scores. Multivariable analysis was performed to develop a cost prediction model. The mean cost of index VHR hospitalization was $20,700. Total and operating room costs correlated with increasing CDC wound class, VHRS surgical site infection score, VHRS surgical site occurrence score, American Society of Anesthesiologists class, and Ventral Hernia Working Group (all p < 0.01). The VHRS surgical site infection scores correlated negatively with contribution margin (-280; p < 0.01). Multivariable predictors of total hospital costs for the index hospitalization included wound class, hernia defect size, age, American Society of Anesthesiologists class 3 or 4, use of biologic mesh, and 2+ mesh pieces; explaining 73% of the variance in costs (p < 0.001). Weight optimization significantly reduced direct and operating room costs (p < 0.05). Cardiac evaluation was associated with increased costs. Ventral hernia repair hospital costs are more accurately predicted by CDC wound class than VHR risk scores. 
A straightforward 6-factor model predicted most cost variation for VHR. Copyright © 2018 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
Ye, Xiaoting; Sui, Zhongquan
2016-03-01
Changes in the physicochemical properties and starch digestibility of white salted noodles (WSN) at different cooking stages were investigated. The noodles were dried in fresh air and then cooked for 2-12 min by boiling in distilled water to determine cooking quality, textural properties, and optical characteristics. For starch digestibility, dry noodles were milled and sieved into particle size classes ranging from 0.5 mm to 5.0 mm and hydrolyzed by porcine pancreatic α-amylase. The optimal cooking time of WSN, determined by squeezing between glass plates, was 6 min. The results showed that the kinetics of solvation of starch and protein molecules were responsible for the changes in the physicochemical properties of WSN during cooking. The susceptibility of starch to α-amylase was influenced by cooking time, particle size, and enzyme treatment. At the optimal cooking stage, rapidly digestible starch (RDS) reached its highest values (63.14-71.97%) while slowly digestible starch (SDS; 2.47-10.74%) and resistant starch (RS; 23.94-26.88%) were lowest, indicating that susceptibility to enzymatic hydrolysis is important in defining the cooked stage. The study suggested that cooking quality and digestibility were not correlated, but that texture greatly controls the digestibility of the noodles. Copyright © 2015 Elsevier B.V. All rights reserved.
Optimal flexible sample size design with robust power.
Zhang, Lanju; Cui, Lu; Yang, Bo
2016-08-30
It is well recognized that sample size determination is challenging because of uncertainty about the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stopping at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow a sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides the most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.
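The robustness criterion above can be illustrated with a much simpler setting than the paper's adaptive designs: judge a candidate design by its worst-case power over a range of plausible effect sizes rather than at a single assumed effect. The designs here are plain fixed-sample two-arm z-tests and all numbers are invented.

```python
from statistics import NormalDist

# Illustrative sketch of the "robust power" idea: evaluate each design by
# its MINIMUM power over a range of plausible effect sizes. Candidates are
# fixed-sample two-arm z-test designs (sigma = 1, one-sided alpha = 0.025).
Z = NormalDist()
z_alpha = Z.inv_cdf(0.975)

def power(n_per_arm, delta):
    # Approximate power of a two-sample z-test with n subjects per arm.
    return 1 - Z.cdf(z_alpha - delta * (n_per_arm / 2) ** 0.5)

effect_range = [0.30, 0.35, 0.40, 0.45, 0.50]  # plausible effect sizes
# Smallest per-arm n whose worst-case power over the range is at least 80%.
n_robust = next(n for n in range(10, 1000)
                if min(power(n, d) for d in effect_range) >= 0.80)
```

Because power is increasing in the effect size, the worst case sits at the smallest plausible effect, so the robust design is simply powered for that effect; the paper's contribution is making the analogous trade-off among adaptive designs.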
A reduced successive quadratic programming strategy for errors-in-variables estimation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tjoa, I.-B.; Biegler, L. T.; Carnegie-Mellon Univ.
Parameter estimation problems in process engineering represent a special class of nonlinear optimization problems, because the maximum likelihood structure of the objective function can be exploited. Within this class, the errors in variables method (EVM) is particularly interesting. Here we seek a weighted least-squares fit to the measurements with an underdetermined process model. Thus, both the number of variables and degrees of freedom available for optimization increase linearly with the number of data sets. Large optimization problems of this type can be particularly challenging and expensive to solve because, for general-purpose nonlinear programming (NLP) algorithms, the computational effort increases at least quadratically with problem size. In this study we develop a tailored NLP strategy for EVM problems. The method is based on a reduced Hessian approach to successive quadratic programming (SQP), but with the decomposition performed separately for each data set. This leads to the elimination of all variables but the model parameters, which are determined by a QP coordination step. In this way the computational effort remains linear in the number of data sets. Moreover, unlike previous approaches to the EVM problem, global and superlinear properties of the SQP algorithm apply naturally. Also, the method directly incorporates inequality constraints on the model parameters (although not on the fitted variables). This approach is demonstrated on five example problems with up to 102 degrees of freedom. Compared to general-purpose NLP algorithms, large improvements in computational performance are observed.
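The per-data-set elimination idea can be shown on a toy problem. This is not the paper's SQP code: for a linear EVM model y = θx with invented data, the fitted x for each measurement has a closed form given θ, so all variables except the model parameter are eliminated and a one-dimensional search remains.

```python
# Toy illustration of per-data-set elimination in errors-in-variables (EVM)
# fitting of y = theta * x, where both x and y are measured with error.
# Data and variances are invented for illustration.
xm = [1.0, 2.1, 2.9, 4.2]      # measured x values
ym = [2.1, 3.9, 6.2, 8.1]      # measured y values
sx2, sy2 = 0.1, 0.1            # assumed measurement-error variances

def profile_obj(theta):
    total = 0.0
    for xhat, yhat in zip(xm, ym):
        # Closed-form optimal fitted x for this point, given theta:
        x = (xhat / sx2 + theta * yhat / sy2) / (1 / sx2 + theta ** 2 / sy2)
        total += (xhat - x) ** 2 / sx2 + (yhat - theta * x) ** 2 / sy2
    return total

# The coordination step reduces to a search over theta alone, so the work
# per evaluation stays linear in the number of data points.
theta_star = min((i / 1000 for i in range(1000, 3001)), key=profile_obj)
```

Each measurement contributes one independent elimination, which mirrors how the paper keeps computational effort linear in the number of data sets.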
ERIC Educational Resources Information Center
Bettinger, Eric; Doss, Christopher; Loeb, Susanna; Taylor, Eric
2015-01-01
Class size is a first-order consideration in the study of education production and education costs. How larger or smaller classes affect student outcomes is especially relevant to the growth and design of online classes. We study a field experiment in which college students were quasi-randomly assigned to either a large or a small class. All…
45 CFR 1306.32 - Center-based program option.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Center-based program option. (a) Class size. (1) Head Start classes must be staffed by a teacher and an aide or two teachers and, whenever possible, a volunteer. (2) Grantees must determine their class size based on the predominant age of the children who will participate in the class and whether or not a...
Modified centroid for estimating sand, silt, and clay from soil texture class
USDA-ARS?s Scientific Manuscript database
Models that require inputs of soil particle size commonly use soil texture class for input; however, texture classes do not represent the continuum of soil size fractions. Soil texture class and clay percentage are collected as a standard practice for many land management agencies (e.g., NRCS, BLM, ...
Special Education Program Standards Study. Commonwealth of Virginia. Final Technical Report.
ERIC Educational Resources Information Center
Keith, Timothy Z.; And Others
This federally funded study investigated Virginia special education program standards, focusing on local applications of the standards for class size and class mix and the effect of varying class size and class mix on student outcomes. The study concentrated on students with educable mental retardation, severe emotional disturbance, and specific…
The False Promise of Class-Size Reduction
ERIC Educational Resources Information Center
Chingos, Matthew M.
2011-01-01
Class-size reduction, or CSR, is enormously popular with parents, teachers, and the public in general. Many parents believe that their children will benefit from more individualized attention in a smaller class and many teachers find smaller classes easier to manage. The pupil-teacher ratio is an easy statistic for the public to monitor as a…
46 CFR 56.30-10 - Flanged joints (modifies 104.5.1(a)).
Code of Federal Regulations, 2013 CFR
2013-10-01
... addition of a strength fillet weld of the size as shown, may be used in Class I systems not exceeding 750... buttwelding flanges must be provided. For Class II piping systems, the size of the strength fillet may be... void spaces is desirable. For systems of Class II, the size of the strength fillet may be limited to a...
46 CFR 56.30-10 - Flanged joints (modifies 104.5.1(a)).
Code of Federal Regulations, 2014 CFR
2014-10-01
... addition of a strength fillet weld of the size as shown, may be used in Class I systems not exceeding 750... buttwelding flanges must be provided. For Class II piping systems, the size of the strength fillet may be... void spaces is desirable. For systems of Class II, the size of the strength fillet may be limited to a...
46 CFR 56.30-10 - Flanged joints (modifies 104.5.1(a)).
Code of Federal Regulations, 2012 CFR
2012-10-01
... addition of a strength fillet weld of the size as shown, may be used in Class I systems not exceeding 750... buttwelding flanges must be provided. For Class II piping systems, the size of the strength fillet may be... void spaces is desirable. For systems of Class II, the size of the strength fillet may be limited to a...
ERIC Educational Resources Information Center
Lapsley, Daniel K.; Daytner, Katrina M.; Kelly, Ken; Maxwell, Scott E.
This large-scale evaluation of Indiana's Prime Time, a funding mechanism designed to reduce class size or pupil-teacher ratio (PTR) in grades K-3, examined the academic performance of nearly 11,000 randomly selected third graders on the state-mandated standardized achievement test as a function of class size, PTR, and presence of an instructional…
The Impact of a Universal Class-Size Reduction Policy: Evidence from Florida's Statewide Mandate
ERIC Educational Resources Information Center
Chingos, Matthew M.
2012-01-01
Class-size reduction (CSR) mandates presuppose that resources provided to reduce class size will have a larger impact on student outcomes than resources that districts can spend as they see fit. I estimate the impact of Florida's statewide CSR policy by comparing the deviations from prior achievement trends in districts that were required to…
Class Size and Sorting in Market Equilibrium: Theory and Evidence. NBER Working Paper No. 13303
ERIC Educational Resources Information Center
Urquiola, Miguel; Verhoogen, Eric
2007-01-01
This paper examines how schools choose class size and how households sort in response to those choices. Focusing on the highly liberalized Chilean education market, we develop a model in which schools are heterogeneous in an underlying productivity parameter, class size is a component of school quality, households are heterogeneous in income and…
The Class Size Question: A Study at Different Levels of Analysis. ACER Research Monograph No. 26.
ERIC Educational Resources Information Center
Larkin, Anthony I.; Keeves, John P.
The purpose of this investigation was to examine the ways in which class size affected other facets of the educational environment of the classroom. The study focused on the commonly found positive relationship between class size and achievement. The most plausible explanation of the evidence seems to involve the effects of grouping more able…
Reducing Class Size: A Smart Way To Improve America's Urban Schools. Second Edition.
ERIC Educational Resources Information Center
Naik, Manish; Casserly, Michael; Uro, Gabriela
The Council of the Great City Schools, a coalition of the largest urban public schools in the United States, surveyed its membership to determine how they were using federal class size reduction funds in the 2000-2001 school year. Thirty-six major urban school systems responded. Results indicate that the federal class size reduction program is…
NASA Technical Reports Server (NTRS)
Kimmel, William M. (Technical Monitor); Bradley, Kevin R.
2004-01-01
This paper describes the development of a methodology for sizing Blended-Wing-Body (BWB) transports and how the capabilities of the Flight Optimization System (FLOPS) have been expanded using that methodology. In this approach, BWB transports are sized based on the number of passengers in each class that must fit inside the centerbody or pressurized vessel. Weight estimation equations for this centerbody structure were developed using Finite Element Analysis (FEA). This paper shows how the sizing methodology has been incorporated into FLOPS to enable the design and analysis of BWB transports. Previous versions of FLOPS did not have the ability to accurately represent or analyze BWB configurations in any reliable, logical way. The expanded capabilities allow the design and analysis of a 200 to 450-passenger BWB transport or the analysis of a BWB transport for which the geometry is already known. The modifications to FLOPS resulted in differences of less than 4 percent for the ramp weight of a BWB transport in this range when compared to previous studies performed by NASA and Boeing.
Microcystin distribution in physical size class separations of natural plankton communities
Graham, J.L.; Jones, J.R.
2007-01-01
Phytoplankton communities in 30 northern Missouri and Iowa lakes were physically separated into 5 size classes (>100 µm, 53-100 µm, 35-53 µm, 10-35 µm, 1-10 µm) during 15-21 August 2004 to determine the distribution of microcystin (MC) in size fractionated lake samples and assess how net collections influence estimates of MC concentration. MC was detected in whole water (total) from 83% of lakes sampled, and total MC values ranged from 0.1-7.0 µg/L (mean = 0.8 µg/L). On average, MC in the >100 µm size class comprised ~40% of total MC, while other individual size classes contributed 9-20% to total MC. MC values decreased with size class and were significantly greater in the >100 µm size class (mean = 0.5 µg/L) than the 35-53 µm (mean = 0.1 µg/L), 10-35 µm (mean = 0.0 µg/L), and 1-10 µm (mean = 0.0 µg/L) size classes (p < 0.01). MC values in nets with 100-µm, 53-µm, 35-µm, and 10-µm mesh were cumulatively summed to simulate the potential bias of measuring MC with various size plankton nets. On average, a 100-µm net underestimated total MC by 51%, compared to 37% for a 53-µm net, 28% for a 35-µm net, and 17% for a 10-µm net. While plankton nets consistently underestimated total MC, concentration of algae with net sieves allowed detection of MC at low levels (~0.01 µg/L); 93% of lakes had detectable levels of MC in concentrated samples. Thus, small mesh plankton nets are an option for documenting MC occurrence, but whole water samples should be collected to characterize total MC concentrations. © Copyright by the North American Lake Management Society 2007.
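The net-simulation arithmetic is simple cumulative summation. In the sketch below, MC in each size fraction is summed from the coarsest mesh downward and compared with the whole-water total; all fraction values are invented, not the study's measurements.

```python
# Illustrative arithmetic for simulating plankton-net bias: a net of a given
# mesh captures only the size classes coarser than its mesh, so its MC
# estimate is a cumulative sum. The values below are INVENTED.
size_mc = {">100": 0.32, "53-100": 0.10, "35-53": 0.07,
           "10-35": 0.09, "1-10": 0.08}          # µg/L per size class
whole_water = 0.80                               # µg/L whole-water total

def net_bias(mesh_order=(">100", "53-100", "35-53", "10-35")):
    """Fraction of total MC missed by nets of decreasing mesh size."""
    biases, cum = [], 0.0
    for cls in mesh_order:
        cum += size_mc[cls]
        biases.append(1 - cum / whole_water)
    return biases
```

Finer meshes capture more classes, so the bias list is strictly decreasing, matching the pattern in the abstract (51% → 37% → 28% → 17%).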
Wodtke, Geoffrey T.
2016-01-01
This study outlines a theory of social class based on workplace ownership and authority relations, and it investigates the link between social class and growth in personal income inequality since the 1980s. Inequality trends are governed by changes in between-class income differences, changes in the relative size of different classes, and changes in within-class income dispersion. Data from the General Social Survey are used to investigate each of these changes in turn and to evaluate their impact on growth in inequality at the population level. Results indicate that between-class income differences grew by about 60 percent since the 1980s and that the relative size of different classes remained fairly stable. A formal decomposition analysis indicates that changes in the relative size of different social classes had a small dampening effect and that growth in between-class income differences had a large inflationary effect on trends in personal income inequality. PMID:27087695
NASA Astrophysics Data System (ADS)
Zhu, Zhe; Gallant, Alisa L.; Woodcock, Curtis E.; Pengra, Bruce; Olofsson, Pontus; Loveland, Thomas R.; Jin, Suming; Dahal, Devendra; Yang, Limin; Auch, Roger F.
2016-12-01
The U.S. Geological Survey's Land Change Monitoring, Assessment, and Projection (LCMAP) initiative is a new end-to-end capability to continuously track and characterize changes in land cover, use, and condition to better support research and applications relevant to resource management and environmental change. Among the LCMAP product suite are annual land cover maps that will be available to the public. This paper describes an approach to optimize the selection of training and auxiliary data for deriving the thematic land cover maps based on all available clear observations from Landsats 4-8. Training data were selected from map products of the U.S. Geological Survey's Land Cover Trends project. The Random Forest classifier was applied for different classification scenarios based on the Continuous Change Detection and Classification (CCDC) algorithm. We found that extracting training data proportionally to the occurrence of land cover classes was superior to an equal distribution of training data per class, and suggest using a total of 20,000 training pixels to classify an area about the size of a Landsat scene. The problem of unbalanced training data was alleviated by extracting a minimum of 600 training pixels and a maximum of 8000 training pixels per class. We additionally explored removing outliers contained within the training data based on their spectral and spatial criteria, but observed no significant improvement in classification results. We also tested the importance of different types of auxiliary data that were available for the conterminous United States, including: (a) five variables used by the National Land Cover Database, (b) three variables from the cloud screening "Function of mask" (Fmask) statistics, and (c) two variables from the change detection results of CCDC. 
We found that auxiliary variables such as a Digital Elevation Model and its derivatives (aspect, position index, and slope), potential wetland index, water probability, snow probability, and cloud probability improved the accuracy of land cover classification. Compared to the original strategy of the CCDC algorithm (500 pixels per class), the use of the optimal strategy improved the classification accuracies substantially (15-percentage point increase in overall accuracy and 4-percentage point increase in minimum accuracy).
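The training-pixel allocation described above (proportional to class occurrence, clamped per class) can be sketched directly; the class counts below are invented.

```python
# Sketch of the training-data strategy described above: sample pixels
# proportionally to class occurrence, but clamp each class to a minimum of
# 600 and a maximum of 8000 pixels, targeting ~20,000 pixels in total.
def allocate_training(class_counts, total=20000, lo=600, hi=8000):
    n_all = sum(class_counts.values())
    return {c: min(hi, max(lo, round(total * n / n_all)))
            for c, n in class_counts.items()}

counts = {"forest": 500_000, "cropland": 400_000,
          "urban": 80_000, "water": 20_000}
alloc = allocate_training(counts)
```

The clamps address the unbalanced-training-data problem: dominant classes cannot crowd out the rest, and rare classes still get enough pixels to be learnable.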
A mathematical framework for the selection of an optimal set of peptides for epitope-based vaccines.
Toussaint, Nora C; Dönnes, Pierre; Kohlbacher, Oliver
2008-12-01
Epitope-based vaccines (EVs) have a wide range of applications: from therapeutic to prophylactic approaches, from infectious diseases to cancer. The development of an EV is based on the knowledge of target-specific antigens from which immunogenic peptides, so-called epitopes, are derived. Such epitopes form the key components of the EV. Due to regulatory, economic, and practical concerns the number of epitopes that can be included in an EV is limited. Furthermore, as the major histocompatibility complex (MHC) binding these epitopes is highly polymorphic, every patient possesses a set of MHC class I and class II molecules of differing specificities. A peptide combination effective for one person can thus be completely ineffective for another. This renders the optimal selection of these epitopes an important and interesting optimization problem. In this work we present a mathematical framework based on integer linear programming (ILP) that allows the formulation of various flavors of the vaccine design problem and the efficient identification of optimal sets of epitopes. Out of a user-defined set of predicted or experimentally determined epitopes, the framework selects the set with the maximum likelihood of eliciting a broad and potent immune response. Our ILP approach allows an elegant and flexible formulation of numerous variants of the EV design problem. In order to demonstrate this, we show how common immunological requirements for a good EV (e.g., coverage of epitopes from each antigen, coverage of all MHC alleles in a set, or avoidance of epitopes with high mutation rates) can be translated into constraints or modifications of the objective function within the ILP framework. An implementation of the algorithm outperforms a simple greedy strategy as well as a previously suggested evolutionary algorithm and has runtimes on the order of seconds for typical problem sizes.
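The selection problem has the flavor of a set-cover-constrained knapsack. The paper solves it with integer linear programming; the sketch below finds the same optimum for a toy instance by exhaustive search, avoiding a solver dependency. All immunogenicity scores and MHC-allele coverages are invented.

```python
from itertools import combinations

# Toy epitope selection: maximize total immunogenicity subject to covering
# every MHC allele with at most K epitopes. Scores/coverages are INVENTED;
# the cited framework formulates this (and richer variants) as an ILP.
epitopes = {
    "E1": (0.9, {"A*02:01"}),
    "E2": (0.7, {"B*07:02"}),
    "E3": (0.6, {"A*02:01", "B*07:02"}),
    "E4": (0.5, {"A*01:01"}),
    "E5": (0.4, {"A*01:01", "B*07:02"}),
}
alleles = {"A*02:01", "A*01:01", "B*07:02"}   # coverage constraint
K = 3                                          # vaccine size limit

def best_selection():
    best = None
    for r in range(1, K + 1):
        for combo in combinations(epitopes, r):
            covered = set().union(*(epitopes[e][1] for e in combo))
            if alleles <= covered:                 # all alleles covered
                score = sum(epitopes[e][0] for e in combo)
                if best is None or score > best[0]:
                    best = (score, combo)
    return best
```

An ILP expresses the same model with binary selection variables, one coverage constraint per allele, and a cardinality constraint, which is what lets the framework scale past toy sizes and absorb the extra immunological constraints mentioned above.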
Thin-plate spline analysis of craniofacial growth in Class I and Class II subjects.
Franchi, Lorenzo; Baccetti, Tiziano; Stahl, Franka; McNamara, James A
2007-07-01
To compare the craniofacial growth characteristics of untreated subjects with Class II division 1 malocclusion with those of subjects with normal (Class I) occlusion from the prepubertal through the postpubertal stages of development. The Class II division 1 sample consisted of 17 subjects (11 boys and 6 girls). The Class I sample also consisted of 17 subjects (13 boys and 4 girls). Three craniofacial regions (cranial base, maxilla, and mandible) were analyzed on the lateral cephalograms of the subjects in both groups by means of thin-plate spline analysis at T1 (prepubertal) and T2 (postpubertal). Both cross-sectional and longitudinal comparisons were performed on both size and shape differences between the two groups. The results showed an increased cranial base angulation as a morphological feature of Class II malocclusion at the prepubertal developmental phase. Maxillary changes in either shape or size were not significant. Subjects with Class II malocclusion exhibited a significant deficiency in the size of the mandible at the completion of active craniofacial growth as compared with Class I subjects. A significant deficiency in the size of the mandible became apparent in Class II subjects during the circumpubertal period and it was still present at the completion of active craniofacial growth.
Protograph LDPC Codes Over Burst Erasure Channels
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Sam; Jones, Christopher
2006-01-01
In this paper we design high-rate protograph-based LDPC codes suitable for binary erasure channels. To simplify encoder and decoder implementation for high-data-rate transmission, the structure of the codes is based on protographs and circulants. These LDPC codes can improve data link and network layer protocols in support of communication networks. Two classes of codes were designed. One class is designed for large block sizes with an iterative decoding threshold that approaches the capacity of the binary erasure channel. The other class is designed for short block sizes based on maximizing the minimum stopping set size. For high code rates and short blocks, the second class outperforms the first.
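The role of stopping sets can be seen in the standard iterative (peeling) erasure decoder: any parity check with exactly one erased bit recovers it as the XOR of the others, and decoding stalls only when the erasures form a stopping set. The tiny parity-check matrix below is illustrative, not an actual protograph-based code.

```python
# Peeling decoder for the binary erasure channel. A check with exactly one
# erased participant recovers that bit; iteration stalls on a stopping set.
# H is a tiny illustrative parity-check matrix, not a protograph code.
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

def peel(bits):
    """Resolve erasures (None entries) by peeling; returns the bit list."""
    progress = True
    while progress:
        progress = False
        for row in H:
            idx = [j for j, h in enumerate(row) if h]
            erased = [j for j in idx if bits[j] is None]
            if len(erased) == 1:
                j = erased[0]
                bits[j] = sum(bits[k] for k in idx if k != j) % 2
                progress = True
    return bits
```

Maximizing the minimum stopping set size, as the short-block code class does, maximizes the number of erasures this decoder is guaranteed to survive.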
Mechanical design of SST-GATE, a dual-mirror telescope for the Cherenkov Telescope Array
NASA Astrophysics Data System (ADS)
Dournaux, Jean-Laurent; Huet, Jean-Michel; Amans, Jean-Philippe; Dumas, Delphine; Laporte, Philippe; Sol, Hélène; Blake, Simon
2014-07-01
The Cherenkov Telescope Array (CTA) project aims to create the next generation Very High Energy (VHE) gamma-ray telescope array. It will be devoted to the observation of gamma rays over a wide band of energy, from a few tens of GeV to more than 100 TeV. Two sites are foreseen to view the whole sky, where about 100 telescopes of three different classes, each related to a specific energy region to be investigated, will be installed. Among these, the Small Size Telescopes (SSTs) are devoted to the highest energy region, extending beyond 100 TeV. Due to the large number of SSTs, their unit cost is an important parameter. At the Observatoire de Paris, we have designed a prototype of a Small Size Telescope named SST-GATE, based on the dual-mirror Schwarzschild-Couder optical formula, which has never before been implemented in the design of a telescope. Over the last two years, we developed a mechanical design for SST-GATE from the optical and preliminary mechanical designs made by the University of Durham. The integration of this telescope is currently in progress. Since the early stages of the mechanical design of SST-GATE, the finite element method has been used, employing shape and topology optimization techniques, to help design several elements of the telescope. This allowed optimization of the mechanical stiffness/mass ratio, leading to a lightweight and less expensive mechanical structure. These techniques and the resulting mechanical design are detailed in this paper. We will also describe the finite element analyses carried out to calculate the mechanical deformations and the stresses in the structure under observing and survival conditions.
ERIC Educational Resources Information Center
Speas, Carol
In 2001-2002, 23 schools in the Wake County Public School System (WCPSS), North Carolina, were provided with 40 teacher positions through the Class Size Reduction Program (CSR). Achievement results for students in reduced class sizes were compared with those of similar students in other CSR schools who did not choose the same grade for the project…
ERIC Educational Resources Information Center
Mitchell, Ross E.
This paper examines the social, political, and economic factors that influenced the adoption and diffusion of early-elementary school class-size-reduction policies at the state level. It applies a neo-institutional framework to explain the rapid spread of class-size reduction policies throughout many state legislatures and boards of education. It…
Class Size and Student Performance at a Public Research University: A Cross-Classified Model
ERIC Educational Resources Information Center
Johnson, Iryna Y.
2010-01-01
This study addresses several methodological problems that have confronted prior research on the effect of class size on student achievement. Unlike previous studies, this analysis accounts for the hierarchical data structure of student achievement, where grades are nested within classes and students, and considers a wide range of class sizes…
77 FR 10724 - Western Pacific Pelagic Fisheries; American Samoa Longline Limited Entry Program
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-23
... size class falls below the maximum allowed. Six permits are available, as follows: Four in Class A (vessels less than or equal to 40 ft in overall length); and Two in Class D (over 70 ft in overall length... the highest priority to the applicant (for any vessel size class) with the earliest documented...
Shan, Haijun; Xu, Haojie; Zhu, Shanan; He, Bin
2015-10-21
For sensorimotor-rhythm-based brain-computer interface (BCI) systems, classification of different motor imageries (MIs) remains a crucial problem. An important question is how many scalp electrodes (channels) should be used in order to reach optimal performance in classifying motor imaginations. While previous research on channel selection has mainly focused on MI task paradigms without feedback, the present work investigates optimal channel selection in MI paradigms with real-time feedback (two-class and four-class control paradigms). In the present study, three datasets, recorded respectively from an MI task experiment and from two-class and four-class control experiments, were analyzed offline. Multiple frequency-spatial synthesized features were comprehensively extracted from every channel, and a new enhanced method, IterRelCen, was proposed to perform channel selection. IterRelCen was constructed on the Relief algorithm but enhanced in two respects: a changed target sample selection strategy and the adoption of iterative computation, making it more robust in feature selection. Finally, a multiclass support vector machine was applied as the classifier. The smallest number of channels that yielded the best classification accuracy was considered optimal. One-way ANOVA was employed to test the significance of performance differences among using the optimal channels, all channels, and three typical MI channels (C3, C4, Cz). The results show that the proposed method outperformed other channel selection methods, achieving average classification accuracies of 85.2, 94.1, and 83.2% for the three datasets, respectively. Moreover, the channel selection results reveal that the average numbers of optimal channels differed significantly among the three MI paradigms. It is demonstrated that IterRelCen has a strong ability for feature selection.
In addition, the results show that the numbers of optimal channels in the three motor imagery BCI paradigms are distinct: from an MI task paradigm, to a two-class control paradigm, to a four-class control paradigm, the number of channels required to optimize classification accuracy increased. These findings may provide useful information for optimizing EEG-based BCI systems and further improving the performance of noninvasive BCIs.
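The Relief weighting at the heart of IterRelCen can be illustrated with a minimal sketch. This is only the basic Relief scheme (nearest hit vs. nearest miss), not the paper's enhanced target-selection or iterative variants, and the two-channel toy data are invented for illustration:

```python
import random

def relief_weights(X, y, n_iter=200, seed=0):
    """Basic Relief: reward features that differ at the nearest miss
    and agree at the nearest hit."""
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    w = [0.0] * d

    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    for _ in range(n_iter):
        i = rng.randrange(n)
        hits = [j for j in range(n) if j != i and y[j] == y[i]]
        misses = [j for j in range(n) if y[j] != y[i]]
        h = min(hits, key=lambda j: dist(X[i], X[j]))    # nearest same-class sample
        m = min(misses, key=lambda j: dist(X[i], X[j]))  # nearest other-class sample
        for f in range(d):
            w[f] += abs(X[i][f] - X[m][f]) - abs(X[i][f] - X[h][f])
    return w

# Toy data: "channel" 0 carries class information, "channel" 1 is pure noise.
rng = random.Random(1)
X, y = [], []
for k in range(100):
    label = k % 2
    X.append([label + rng.gauss(0, 0.2), rng.gauss(0, 1.0)])
    y.append(label)

w = relief_weights(X, y)
ranked = sorted(range(len(w)), key=lambda f: -w[f])
print(ranked)  # the informative channel should rank first
```

Selecting the top-ranked channels until accuracy stops improving gives the "smallest number of channels with best accuracy" criterion described above.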
Logical definability and asymptotic growth in optimization and counting problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Compton, K.
1994-12-31
There has recently been a great deal of interest in the relationship between logical definability and NP-optimization problems. Let MS{sub n} (resp. MP{sub n}) be the class of problems to compute, for a given finite structure A, the maximum number of tuples {bar x} in A satisfying a {Sigma}{sub n} (resp. {Pi}{sub n}) formula {psi}({bar x}, {bar S}) as {bar S} ranges over predicates on A. Kolaitis and Thakur showed that the classes MS{sub n} and MP{sub n} collapse to a hierarchy of four levels. Papadimitriou and Yannakakis previously showed that problems in the two lowest levels MS{sub 0} and MS{sub 1} (which they called Max Snp and Max Np) are approximable to within a constant factor in polynomial time. Similarly, Saluja, Subrahmanyam, and Thakur defined SS{sub n} (resp. SP{sub n}) to be the class of problems to compute, for a given finite structure A, the number of tuples ({bar T}, {bar S}) satisfying a given {Sigma}{sub n} (resp. {Pi}{sub n}) formula {psi}({bar T}, {bar S}) in A. They showed that the classes SS{sub n} and SP{sub n} collapse to a hierarchy of five levels and that problems in the two lowest levels SS{sub 0} and SS{sub 1} have a fully polynomial time randomized approximation scheme. We define extended classes MSF{sub n}, MPF{sub n}, SSF{sub n}, and SPF{sub n} by allowing formulae to contain predicates definable in a logic known as least fixpoint logic. The resulting hierarchies collapse to the same number of levels, and problems in the bottom levels can be approximated as before, but now some problems descend from the highest levels in the original hierarchies to the lowest levels in the new hierarchies. We introduce a method for characterizing rates of growth of average solution sizes, thereby showing that a number of important problems do not belong to MSF{sub 1} or SSF{sub 1}. This method is related to limit laws for logics and the probabilistic method from combinatorics.
Jiao, Pengfei; Cai, Fei; Feng, Yiding; Wang, Wenjun
2017-08-21
Link prediction aims at forecasting latent or unobserved edges in complex networks and has a wide range of applications. Most existing methods and models exploit only one class of organization of the network, losing important information hidden in its other organizations. In this paper, we propose a link prediction framework, called NMF 3 here, which makes the best of the structure of networks at different levels of organization, based on nonnegative matrix factorization. We first map the observed network into another space by kernel functions, which yields the different-order organizations. Then we combine the adjacency matrix of the network with one of the other organizations, obtaining the objective function of our framework for link prediction based on nonnegative matrix factorization. Third, we derive an iterative algorithm to optimize the objective function, which converges to a local optimum, and we propose a fast optimization strategy for large networks. Lastly, we test the proposed framework with two kernel functions on a series of real-world networks under different training set sizes, and the experimental results show the feasibility, effectiveness, and competitiveness of the proposed framework.
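The core idea, factorizing the adjacency matrix into nonnegative factors and scoring unobserved pairs by the reconstruction, can be sketched as follows. This is plain NMF with multiplicative updates on a toy two-community network, not the paper's kernel-based NMF 3 framework:

```python
import random

def matmul(A, B):
    Bt = list(zip(*B))
    return [[sum(a * b for a, b in zip(row, col)) for col in Bt] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(A, rank=2, iters=300, seed=0, eps=1e-9):
    """Factor A ~= W H with nonnegative entries via multiplicative updates."""
    rng = random.Random(seed)
    n, m = len(A), len(A[0])
    W = [[rng.random() for _ in range(rank)] for _ in range(n)]
    H = [[rng.random() for _ in range(m)] for _ in range(rank)]
    for _ in range(iters):
        Wt = transpose(W)
        num, den = matmul(Wt, A), matmul(Wt, matmul(W, H))  # H <- H * (W'A)/(W'WH)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(m)]
             for i in range(rank)]
        Ht = transpose(H)
        num, den = matmul(A, Ht), matmul(matmul(W, H), Ht)  # W <- W * (AH')/(WHH')
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(rank)]
             for i in range(n)]
    return W, H

# Toy network with two communities: nodes 0-2 and nodes 3-4.
A = [[0, 1, 1, 0, 0],
     [1, 0, 1, 0, 0],
     [1, 1, 0, 0, 0],
     [0, 0, 0, 0, 1],
     [0, 0, 0, 1, 0]]
W, H = nmf(A, rank=2)
S = matmul(W, H)  # score an unobserved pair (i, j) by the reconstruction (WH)_ij
print(S[0][1] > S[0][4])  # within-community pairs should outscore cross-community pairs
```

The paper's framework would first pass A through a kernel function and factorize the combined organizations; the scoring step stays the same.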
Squish: Near-Optimal Compression for Archival of Relational Datasets
Gao, Yihan; Parameswaran, Aditya
2017-01-01
Relational datasets are being generated at an alarmingly rapid rate across organizations and industries. Compressing these datasets could significantly reduce storage and archival costs. Traditional compression algorithms, e.g., gzip, are suboptimal for compressing relational datasets since they ignore the table structure and relationships between attributes. We study compression algorithms that leverage the relational structure to compress datasets to a much greater extent. We develop Squish, a system that uses a combination of Bayesian Networks and Arithmetic Coding to capture multiple kinds of dependencies among attributes and achieve near-entropy compression rate. Squish also supports user-defined attributes: users can instantiate new data types by simply implementing five functions for a new class interface. We prove the asymptotic optimality of our compression algorithm and conduct experiments to show the effectiveness of our system: Squish achieves a reduction of over 50% in storage size relative to systems developed in prior work on a variety of real datasets. PMID:28180028
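The premise that table structure helps compression can be illustrated by comparing marginal and joint entropies of two correlated columns: when one attribute nearly determines another, coding them jointly needs fewer bits per record. This is a back-of-the-envelope illustration with invented data, not Squish's Bayesian-network and arithmetic-coding pipeline:

```python
import math
from collections import Counter

def entropy(values):
    """Empirical Shannon entropy in bits per record."""
    counts = Counter(values)
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Toy table in which 'city' determines 'country'.
rows = [("Paris", "FR"), ("Lyon", "FR"), ("Berlin", "DE"), ("Munich", "DE")] * 25
city = [r[0] for r in rows]
country = [r[1] for r in rows]

independent = entropy(city) + entropy(country)  # cost if columns are coded separately
joint = entropy(rows)                           # cost if the dependency is exploited
print(independent, joint)  # 3.0 2.0
```

A gzip-style coder pays roughly the "independent" cost; a structure-aware coder like Squish can approach the joint entropy.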
Khurana, Rajneet Kaur; Gaspar, Balan Louis; Welsby, Gail; Katare, O P; Singh, Kamalinder K; Singh, Bhupinder
2018-06-01
The current research work encompasses the development, characterization, and evaluation of a self-assembled phospholipidic nano-mixed micellar system (SPNMS) of a poorly soluble BCS Class IV xanthone bioactive, mangiferin (Mgf), functionalized with co-delivery of vitamin E TPGS. Systematic optimization using an I-optimal design yielded self-assembled phospholipidic nano-micelles with a particle size of < 60 nm and > 80% drug release in 15 min. Cytotoxicity and cellular uptake studies performed using MCF-7 and MDA-MB-231 cell lines demonstrated greater cell kill and faster cellular uptake. The ex vivo intestinal permeability revealed higher lymphatic uptake, while in situ perfusion and in vivo pharmacokinetic studies indicated nearly 6.6- and 3.0-fold augmentation in the permeability and bioavailability of Mgf, respectively. In a nutshell, vitamin E-functionalized SPNMS of Mgf improved the biopharmaceutical performance of Mgf in rats for enhanced anticancer potency.
NASA Astrophysics Data System (ADS)
Ravanbakhsh, Ali; Franchini, Sebastián
2012-10-01
In recent years, there has been continuing interest in the participation of university research groups in space technology studies by means of their own microsatellites. Involvement in such projects has some inherent challenges, such as limited budget and facilities. Also, because the main objective of these projects is educational, there are usually uncertainties regarding their in-orbit mission and scientific payloads at the early phases of the project. On the other hand, there are predetermined limitations on their mass and volume budgets, owing to the fact that most of them are launched as auxiliary payloads, which reduces the launch cost considerably. The satellite structure subsystem is the one most affected by the launcher constraints. This can affect different aspects, including dimensions, strength and frequency requirements. In this paper, the main focus is on developing a structural design sizing tool containing not only the primary structures' properties as variables but also system-level variables such as payload mass budget and satellite total mass and dimensions. This approach enables the design team to obtain better insight into the design over an extended design envelope. The structural design sizing tool is based on analytical structural design formulas and appropriate assumptions, including both static and dynamic models of the satellite. Finally, a Genetic Algorithm (GA) multiobjective optimization is applied to the design space. The result is a Pareto-optimal front based on two objectives, minimum satellite total mass and maximum payload mass budget, which gives the design team useful insight at the early phases of design.
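The two-objective trade-off can be sketched with a non-dominated (Pareto) filter over sampled designs. The sizing relation and mass numbers below are invented placeholders, and plain random sampling stands in for the paper's genetic algorithm:

```python
import random

def dominates(a, b):
    """a dominates b: no worse in both objectives (total mass down,
    payload up) and strictly better in at least one."""
    return (a[0] <= b[0] and a[1] >= b[1]) and (a[0] < b[0] or a[1] > b[1])

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

rng = random.Random(0)
designs = []
for _ in range(500):
    payload = rng.uniform(2.0, 20.0)                           # kg, hypothetical bounds
    structure = 1.5 + 0.35 * payload * rng.uniform(0.9, 1.3)   # toy sizing relation
    total = payload + structure + 5.0                          # plus assumed fixed bus mass
    designs.append((total, payload))

front = pareto_front(designs)
print(len(front), "non-dominated designs out of", len(designs))
```

A GA would evolve the population toward this front instead of sampling blindly, but the Pareto filter that delivers the final trade-off curve is the same.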
NASA Astrophysics Data System (ADS)
Mangal, S. K.; Sharma, Vivek
2018-02-01
Magnetorheological (MR) fluids belong to a class of smart materials whose rheological characteristics, such as yield stress and viscosity, change in the presence of an applied magnetic field. In this paper, the MR fluid constituents are optimized with on-state yield stress as the response parameter. For this, 18 samples of MR fluid were prepared using an L-18 orthogonal array. These samples were experimentally tested on a developed and fabricated electromagnet setup. It was found that the yield stress of an MR fluid depends mainly on the volume fraction of the iron particles and the type of carrier fluid used. The optimal combination of input parameters for the fluid was found to be mineral oil at 67% by volume, 300-mesh iron powder at 32% by volume, oleic acid at 0.5% by volume, and tetra-methyl-ammonium-hydroxide at 0.7% by volume. This optimal combination gave a numerically predicted on-state yield stress of 48.197 kPa. An experimental confirmation test on the optimized MR fluid sample was then carried out, and the measured response was found to match the numerically obtained value quite well (less than 1% error).
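The main-effects analysis typically paired with an orthogonal array can be sketched as follows: average the response at each level of each factor and keep the level with the best mean. The design matrix and yield-stress numbers below are invented placeholders, not the paper's L-18 data:

```python
from collections import defaultdict

# Toy design: (carrier_oil, iron_vol_pct) -> measured on-state yield stress (kPa).
# All values are illustrative, not measurements from the paper.
runs = [
    ("mineral", 22, 31.0), ("mineral", 27, 39.5), ("mineral", 32, 47.0),
    ("silicone", 22, 27.5), ("silicone", 27, 35.0), ("silicone", 32, 42.5),
]

def best_levels(runs, n_factors=2):
    """Pick, for each factor, the level with the highest mean response."""
    choice = {}
    for f in range(n_factors):
        sums = defaultdict(lambda: [0.0, 0])
        for run in runs:
            level, response = run[f], run[-1]
            sums[level][0] += response
            sums[level][1] += 1
        means = {lvl: s / c for lvl, (s, c) in sums.items()}
        choice[f] = max(means, key=means.get)
    return choice

print(best_levels(runs))  # {0: 'mineral', 1: 32}
```

With a real orthogonal array the same per-level averaging applies; the array's balance is what makes those averages fair estimates of each factor's main effect.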
ERIC Educational Resources Information Center
Ayeni, Olapade Grace; Olowe, Modupe Oluwatoyin
2016-01-01
Large class size is one of the problems in the educational sector that developing nations have been grappling with. Nigeria as a developing nation is no exception. The purpose of this study is to provide views of both lecturers and students on large class size and how it affects teaching and learning in tertiary institutions in Ekiti State of…
Tracey, Sean R.; Pepperell, Julian G.; Domeier, Michael L.; Bennett, Michael B.
2017-01-01
The black marlin (Istiompax indica) is a highly migratory billfish that occupies waters throughout the tropical and subtropical Indo-Pacific. To characterize the vertical habitat use of I. indica, we examined the temperature-depth profiles collected using 102 pop-up satellite archival tags deployed off the east coast of Australia. Modelling of environmental variables revealed location, sea-surface height deviation, mixed layer depth and dissolved oxygen to all be significant predictors of vertical habitat use. Distinct differences in diel movements were observed between the size classes, with larger size classes of marlin (greater than 50 kg) undertaking predictable bounce-diving activity during daylight hours, while diving behaviour of the smallest size class occurred randomly during both day and night. Overall, larger size classes of I. indica were found to use an increased thermal range and spend more time in waters below 150 m than fish of smaller size classes. The differences in the diving behaviour among size classes were suggested to reflect ontogenetic differences in foraging behaviour or physiology. The findings of this study demonstrate, for the first time to our knowledge, ontogenetic differences in vertical habitat in a species of billfish, and further the understanding of pelagic fish ecophysiology in the presence of global environmental change. PMID:29291060
NASA Technical Reports Server (NTRS)
Rash, James L.
2010-01-01
NASA's space data-communications infrastructure, the Space Network and the Ground Network, provide scheduled (as well as some limited types of unscheduled) data-communications services to user spacecraft via orbiting relay satellites and ground stations. An implementation of the methods and algorithms disclosed herein will be a system that produces globally optimized schedules with not only optimized service delivery by the space data-communications infrastructure but also optimized satisfaction of all user requirements and prescribed constraints, including radio frequency interference (RFI) constraints. Evolutionary search, a class of probabilistic strategies for searching large solution spaces, constitutes the essential technology in this disclosure. Also disclosed are methods and algorithms for optimizing the execution efficiency of the schedule-generation algorithm itself. The scheduling methods and algorithms as presented are adaptable to accommodate the complexity of scheduling the civilian and/or military data-communications infrastructure. Finally, the problem itself, and the methods and algorithms, are generalized and specified formally, with applicability to a very broad class of combinatorial optimization problems.
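The flavor of evolutionary search over a schedule space can be sketched as follows: mutate one assignment at a time and keep mutations that do not increase the conflict count. This is a simplified single-parent strategy with invented request and slot counts, not NASA's disclosed algorithm, and RFI and service constraints are reduced to a single conflict measure:

```python
import random

def conflicts(schedule, requests):
    """Count pairs of requests assigned to the same relay and time slot."""
    seen = {}
    c = 0
    for req, (relay, slot) in zip(requests, schedule):
        key = (relay, slot)
        c += seen.get(key, 0)          # each prior occupant is one conflicting pair
        seen[key] = seen.get(key, 0) + 1
    return c

def evolve(requests, n_relays=2, n_slots=6, iters=2000, seed=0):
    rng = random.Random(seed)
    sched = [(rng.randrange(n_relays), rng.randrange(n_slots)) for _ in requests]
    best = conflicts(sched, requests)
    for _ in range(iters):
        i = rng.randrange(len(sched))
        old = sched[i]
        sched[i] = (rng.randrange(n_relays), rng.randrange(n_slots))  # mutation
        new = conflicts(sched, requests)
        if new <= best:
            best = new                 # keep improving or neutral mutants
        else:
            sched[i] = old             # revert harmful mutations
    return sched, best

requests = [f"user{i}" for i in range(10)]
sched, n_conf = evolve(requests)
print(n_conf)  # 10 requests over 12 relay-slot pairs: a conflict-free schedule exists
```

A full evolutionary search would maintain a population and recombine schedules; the fitness function is where globally optimized service delivery and RFI constraints would enter.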
Variations in tooth size and arch dimensions in Malay schoolchildren.
Hussein, Khalid W; Rajion, Zainul A; Hassan, Rozita; Noor, Siti Noor Fazliah Mohd
2009-11-01
To compare the mesio-distal tooth sizes and dental arch dimensions in Malay boys and girls with Class I, Class II and Class III malocclusions. The dental casts of 150 subjects (78 boys, 72 girls), between 12 and 16 years of age, with Class I, Class II and Class III malocclusions were used. Each group consisted of 50 subjects. An electronic digital caliper was used to measure the mesio-distal tooth sizes of the upper and lower permanent teeth (first molar to first molar), the intercanine and intermolar widths. The arch lengths and arch perimeters were measured with AutoCAD software (Autodesk Inc., San Rafael, CA, U.S.A.). The mesio-distal dimensions of the upper lateral incisors and canines in the Class I malocclusion group were significantly smaller than the corresponding teeth in the Class III and Class II groups, respectively. The lower canines and first molars were significantly smaller in the Class I group than the corresponding teeth in the Class II group. The lower intercanine width was significantly smaller in the Class II group as compared with the Class I group, and the upper intermolar width was significantly larger in Class III group as compared with the Class II group. There were no significant differences in the arch perimeters or arch lengths. The boys had significantly wider teeth than the girls, except for the left lower second premolar. The boys also had larger upper and lower intermolar widths and lower intercanine width than the girls. Small, but statistically significant, differences in tooth sizes are not necessarily accompanied by significant arch width, arch length or arch perimeter differences. Generally, boys have wider teeth, larger lower intercanine width and upper and lower intermolar widths than girls.
Class Size Reduction and Urban Students. ERIC Digest.
ERIC Educational Resources Information Center
Schwartz, Wendy
Researchers have long investigated whether smaller classes improve student achievement. Their conclusions suggest that class size reduction (CSR) can result in greater in-depth coverage of subject matter by teachers, enhanced learning and stronger engagement by students, more personalized teacher-student relationships, and safer schools with fewer…
Compressed modes for variational problems in mathematics and physics
Ozoliņš, Vidvuds; Lai, Rongjie; Caflisch, Russel; Osher, Stanley
2013-01-01
This article describes a general formalism for obtaining spatially localized (“sparse”) solutions to a class of problems in mathematical physics, which can be recast as variational optimization problems, such as the important case of Schrödinger’s equation in quantum mechanics. Sparsity is achieved by adding an L1 regularization term to the variational principle, which is shown to yield solutions with compact support (“compressed modes”). Linear combinations of these modes approximate the eigenvalue spectrum and eigenfunctions in a systematically improvable manner, and the localization properties of compressed modes make them an attractive choice for use with efficient numerical algorithms that scale linearly with the problem size. PMID:24170861
Compressed modes for variational problems in mathematics and physics.
Ozolins, Vidvuds; Lai, Rongjie; Caflisch, Russel; Osher, Stanley
2013-11-12
This article describes a general formalism for obtaining spatially localized ("sparse") solutions to a class of problems in mathematical physics, which can be recast as variational optimization problems, such as the important case of Schrödinger's equation in quantum mechanics. Sparsity is achieved by adding an L1 regularization term to the variational principle, which is shown to yield solutions with compact support ("compressed modes"). Linear combinations of these modes approximate the eigenvalue spectrum and eigenfunctions in a systematically improvable manner, and the localization properties of compressed modes make them an attractive choice for use with efficient numerical algorithms that scale linearly with the problem size.
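The mechanism behind compact support is already visible in one dimension: the proximal map of an L1 term (soft-thresholding) sets small coefficients exactly to zero rather than merely shrinking them. This toy example illustrates only that thresholding step, not the paper's algorithm for Schrödinger's equation:

```python
def soft_threshold(b, lam):
    """Minimizer of 0.5*(x - b)**2 + lam*abs(x): the proximal map of the L1 term."""
    if b > lam:
        return b - lam
    if b < -lam:
        return b + lam
    return 0.0

# A smooth "mode" sampled on a grid: its small tails are thresholded to exactly
# zero, so the L1-regularized solution has compact support.
signal = [0.01, 0.05, 0.2, 0.9, 1.0, 0.9, 0.2, 0.05, 0.01]
lam = 0.1
sparse = [soft_threshold(v, lam) for v in signal]
print(sparse)  # tail entries become exactly 0.0
```

An L2 penalty would shrink every entry but leave none exactly zero; the hard zeros produced by the L1 prox are what make the modes compactly supported.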
Phase transitions in restricted Boltzmann machines with generic priors
NASA Astrophysics Data System (ADS)
Barra, Adriano; Genovese, Giuseppe; Sollich, Peter; Tantari, Daniele
2017-10-01
We study generalized restricted Boltzmann machines with generic priors for units and weights, interpolating between Boolean and Gaussian variables. We present a complete analysis of the replica symmetric phase diagram of these systems, which can be regarded as generalized Hopfield models. We underline the role of the retrieval phase for both inference and learning processes and we show that retrieval is robust for a large class of weight and unit priors, beyond the standard Hopfield scenario. Furthermore, we show how the paramagnetic phase boundary is directly related to the optimal size of the training set necessary for good generalization in a teacher-student scenario of unsupervised learning.
American woodcock winter distribution and fidelity to wintering areas
Diefenbach, D.R.; Derleth, E.L.; Vander Haegen, W. Matthew; Nichols, J.D.; Hines, J.E.
1990-01-01
We examined winter distribution and fidelity to wintering areas for the American Woodcock (Scolopax minor), which exhibits reversed sexual size dimorphism. Band-recovery data revealed no difference in winter distributions of different age/sex classes for woodcock from the same breeding areas. Similarly, band recoveries from woodcock banded on wintering grounds revealed no difference in fidelity to wintering sites. Males may winter north of a latitude that is optimal for survival based on physiological considerations, but they gain a reproductive advantage if they are among the first to arrive on the breeding grounds. This may explain our results, which indicate males and females have similar distribution patterns during winter.
Heat exchangers in regenerative gas turbine cycles
NASA Astrophysics Data System (ADS)
Nina, M. N. R.; Aguas, M. P. N.
1985-09-01
Advances in compact heat exchanger design and fabrication, together with rising fuel costs, continue to improve the attractiveness of regenerative gas turbine helicopter engines. In this study, cycle parameters aiming at reduced specific fuel consumption and increased payload or mission range have been optimized together with heat exchanger type and size. The discussion is based on a typical mission for an attack helicopter in the 900 kW power class. A range of heat exchangers is studied to define the most favorable geometry in terms of lower fuel consumption and minimum engine-plus-fuel weight. Heat exchanger volume, frontal area ratio, and the effect of pressure drop on cycle efficiency are considered.
ERIC Educational Resources Information Center
Chingos, Matthew M.
2010-01-01
Class-size reduction (CSR) mandates presuppose that resources provided to reduce class size will have a larger impact on student outcomes than resources that districts can spend as they see fit. I estimate the impact of Florida's statewide CSR policy by comparing the deviations from prior achievement trends in districts that were required to…
ERIC Educational Resources Information Center
Maples, Jeffrey B.
2009-01-01
The purpose of this study was to analyze the effects of class size and student achievement in mathematics and reading. The study focused on grades 6 through 8 and used the results of the North Carolina EOG tests in mathematics and reading for the academic year 2006-2007. This study examined the effects of class size and student achievement in…
ERIC Educational Resources Information Center
Galton, Maurice; Pell, Tony
2012-01-01
In a four-year study of the effect of class size on pupil outcomes in a sample of 36 primary schools in Hong Kong, it has been found that there are few positive differences in attainment between classes set at less than 25 pupils and those of normal size averaging 38. Three cohorts of pupils were studied. In Cohort 1 pupils spent 3 years in small…
ERIC Educational Resources Information Center
Ecalle, Jean; Magnan, Annie; Gibert, Fabienne
2006-01-01
This article examines the impact of class size on literacy skills and on literacy interest in beginning readers from zones with specific educational needs in France. The data came from an experiment involving first graders in which teachers and pupils were randomly assigned to the different class types (small classes of 10-12 pupils vs. regular…
The Effect of Large Classes on English Teaching and Learning in Saudi Secondary Schools
ERIC Educational Resources Information Center
Bahanshal, Dalal A.
2013-01-01
The effect of class size on teaching and learning English as a foreign language (EFL) has long been the subject of contentious debate among researchers. Before the 1950s, concern about the effect of class size on the learning outcomes of students in such classes waned for some time. Yet researchers have reconsidered the case once again…
Berthaume, Michael A.; Dumont, Elizabeth R.; Godfrey, Laurie R.; Grosse, Ian R.
2014-01-01
Teeth are often assumed to be optimal for their function, which allows researchers to derive dietary signatures from tooth shape. Most tooth shape analyses normalize for tooth size, potentially masking the relationship between relative food item size and tooth shape. Here, we model how relative food item size may affect optimal tooth cusp radius of curvature (RoC) during the fracture of brittle food items using a parametric finite-element (FE) model of a four-cusped molar. Morphospaces were created for four different food item sizes by altering cusp RoCs to determine whether optimal tooth shape changed as food item size changed. The morphospaces were also used to investigate whether variation in efficiency metrics (i.e. stresses, energy and optimality) changed as food item size changed. We found that optimal tooth shape changed as food item size changed, but that all optimal morphologies were similar, with one dull cusp that promoted high stresses in the food item and three cusps that acted to stabilize the food item. There were also positive relationships between food item size and the coefficients of variation for stresses in food item and optimality, and negative relationships between food item size and the coefficients of variation for stresses in the enamel and strain energy absorbed by the food item. These results suggest that relative food item size may play a role in selecting for optimal tooth shape, and the magnitude of these selective forces may change depending on food item size and which efficiency metric is being selected. PMID:25320068
Exploration risks and mineral taxation: how fiscal regimes affect exploration incentives
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stauffer, T.R.; Gault, J.C.
1985-01-01
This paper investigates the effects of taxation on exploration risk and establishes certain criteria for an optimal tax on mineral resources, such as oil and gas, where exploration risk (i.e., geological risk) is a key decision variable. The optimization is considered in the context of government ownership of the resource rights, but with an eye to the after-tax incentives perceived by private-sector explorationists. Any government that relies on the private sector for discovery and development must recognize those effects. Taxation affects not only the expected returns from mineral exploration ventures but also the riskiness of such ventures. The potential for misdesign is great. The authors show, however, that it is possible, in realistic cases, simultaneously to increase government revenues, improve the explorationist's return, and reduce exploration risk. The opportunity for such improvements arises because most common mineral tax schemes skew the tax burdens across fields of different sizes or qualities. A key consideration in optimizing a tax regime is designing the tax to assign the appropriate burdens to different classes of discoveries. 7 tables.
Monaghan, Kieran A.
2016-01-01
Natural ecological variability and analytical design can bias the derived value of a biotic index through the variable influence of indicator body size, abundance, richness, and ascribed tolerance scores. Descriptive statistics highlight this risk for 26 aquatic indicator systems; detailed analysis is provided for contrasting weighted-average indices using the example of the BMWP, which has the best supporting data. A difference in body size between taxa from the respective tolerance classes is a common feature of indicator systems; in some it represents a trend ranging from comparatively small pollution-tolerant to larger intolerant organisms. Under this scenario, the propensity to collect a greater proportion of smaller organisms is associated with negative bias; however, positive bias may occur when equipment (e.g. mesh size) selectively samples larger organisms. Biotic indices are often derived from systems where indicator taxa are unevenly distributed along the gradient of tolerance classes. Such skews in indicator richness can distort index values in the direction of taxonomically rich indicator classes, with the subsequent degree of bias related to the treatment of abundance data. The misclassification of indicator taxa causes bias that varies with the magnitude of the misclassification, the relative abundance of misclassified taxa and the treatment of abundance data. These artifacts of assessment design can compromise the ability to monitor biological quality. The statistical treatment of abundance data and the manipulation of indicator assignment and class richness can be used to improve index accuracy. While advances in methods of data collection (i.e. DNA barcoding) may facilitate improvement, the scope to reduce systematic bias is ultimately limited to a strategy of optimal compromise. The shortfall in accuracy must be addressed by statistical pragmatism.
At any particular site, the net bias is a probabilistic function of the sample data, resulting in an error variance around an average deviation. Following standardized protocols and assigning precise reference conditions, the error variance of their comparative ratio (test-site:reference) can be measured and used to estimate the accuracy of the resultant assessment. PMID:27392036
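The mechanics of a weighted-average index such as the BMWP, and its average score per taxon (ASPT), can be sketched as follows. The tolerance scores here are illustrative, not the official BMWP table; note how ASPT, being an average, is less sensitive to indicator richness than the raw sum, which is one of the design choices the abstract discusses:

```python
# Illustrative family tolerance scores (NOT the official BMWP table):
scores = {"Heptageniidae": 10, "Gammaridae": 6, "Baetidae": 4,
          "Asellidae": 3, "Chironomidae": 2, "Oligochaeta": 1}

def bmwp_aspt(families_present):
    """Return (BMWP-style sum, average score per taxon) over scoring families."""
    present = [f for f in set(families_present) if f in scores]
    bmwp = sum(scores[f] for f in present)
    aspt = bmwp / len(present) if present else 0.0
    return bmwp, aspt

clean = bmwp_aspt(["Heptageniidae", "Gammaridae", "Baetidae"])
polluted = bmwp_aspt(["Asellidae", "Chironomidae", "Oligochaeta"])
print(clean)     # high sum, high average: clean-water fauna
print(polluted)  # low sum, low average: tolerant fauna only
```

Because the raw sum grows with every scoring family collected, sampling effort and mesh-size selectivity feed straight into it, which is exactly the body-size and richness bias the abstract describes.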
NASA Astrophysics Data System (ADS)
Ungureanu, Constantin; Koning, Gerben A.; van Leeuwen, Ton G.; Manohar, Srirang
2013-05-01
Currently, gold nanorods can be synthesized in a wide range of sizes. However, for the intended biological applications gold nanorods with approximate dimensions 50 nm × 15 nm are used. We investigate by computer simulation the effect of particle dimensions on the optical and thermal properties in the context of the specific applications of photoacoustic imaging. In addition we discuss the influence of particle size in overcoming the following biophysical barriers when administrated in vivo: extravasation, avoidance of uptake by organs of the reticuloendothelial system, penetration through the interstitium, binding capability and uptake by the target cells. Although more complex biological influences can be introduced in future analysis, the present work illustrates that larger gold nanorods, designated by us as ‘nanobig rods’, may perform better at meeting the requirements for successful in vivo applications compared to their smaller counterparts, which are conventionally used.
A fast time-difference inverse solver for 3D EIT with application to lung imaging.
Javaherian, Ashkan; Soleimani, Manuchehr; Moeller, Knut
2016-08-01
A class of sparse optimization techniques that require only matrix-vector products, rather than explicit access to the forward matrix and its transpose, has received much attention in the past decade for dealing with large-scale inverse problems. This study tailors the application of the so-called Gradient Projection for Sparse Reconstruction (GPSR) to large-scale time-difference three-dimensional electrical impedance tomography (3D EIT). 3D EIT typically suffers from the need for a large number of voxels to cover the whole domain, so its application to real-time imaging, for example monitoring of lung function, remains scarce, since the large number of degrees of freedom greatly increases storage space and reconstruction time. This study shows the great potential of GPSR for large-size time-difference 3D EIT. Further studies are needed to improve its accuracy for imaging small-size anomalies.
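GPSR's key reformulation splits x = u - v with u, v >= 0, turning the L1 problem min 0.5*||Ax - y||^2 + tau*||x||_1 into a bound-constrained quadratic program that needs only matrix-vector products. A tiny dense sketch with invented data (nothing like EIT-sized, and not the authors' tuned solver):

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def matvec_T(A, x):
    return [sum(A[i][j] * x[i] for i in range(len(A))) for j in range(len(A[0]))]

def gpsr(A, y, tau, step=0.1, iters=500):
    """Projected gradient on the split x = u - v with u >= 0, v >= 0."""
    n = len(A[0])
    u, v = [0.0] * n, [0.0] * n
    for _ in range(iters):
        x = [ui - vi for ui, vi in zip(u, v)]
        r = [ri - yi for ri, yi in zip(matvec(A, x), y)]  # residual Ax - y
        g = matvec_T(A, r)                                # gradient of the smooth part
        # gradient w.r.t. u is g + tau, w.r.t. v is -g + tau; project onto >= 0
        u = [max(0.0, ui - step * (gi + tau)) for ui, gi in zip(u, g)]
        v = [max(0.0, vi - step * (-gi + tau)) for vi, gi in zip(v, g)]
    return [ui - vi for ui, vi in zip(u, v)]

# y is generated by a sparse truth: only the first coefficient is active.
A = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0],
     [1.0, 1.0, 1.0]]
y = matvec(A, [2.0, 0.0, 0.0])
x = gpsr(A, y, tau=0.5)
print([round(xi, 2) for xi in x])  # first coefficient recovered (shrunk by tau), rest near zero
```

Only `matvec` and `matvec_T` touch A, which is the point: in EIT these become Jacobian products that never require storing the full forward matrix explicitly in inverted form.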
Ungureanu, Constantin; Koning, Gerben A; van Leeuwen, Ton G; Manohar, Srirang
2013-05-31
Currently, gold nanorods can be synthesized in a wide range of sizes. However, for the intended biological applications gold nanorods with approximate dimensions 50 nm × 15 nm are used. We investigate by computer simulation the effect of particle dimensions on the optical and thermal properties in the context of the specific applications of photoacoustic imaging. In addition we discuss the influence of particle size in overcoming the following biophysical barriers when administrated in vivo: extravasation, avoidance of uptake by organs of the reticuloendothelial system, penetration through the interstitium, binding capability and uptake by the target cells. Although more complex biological influences can be introduced in future analysis, the present work illustrates that larger gold nanorods, designated by us as 'nanobig rods', may perform better at meeting the requirements for successful in vivo applications compared to their smaller counterparts, which are conventionally used.
Pattern formations and optimal packing.
Mityushev, Vladimir
2016-04-01
Patterns of different symmetries may arise from the solution of reaction-diffusion equations. Hexagonal arrays, layers and their perturbations are observed in various models after numerical solution of the corresponding initial-boundary value problems. We demonstrate an intimate connection between pattern formations and optimal random packing on the plane. The main study is based on the following two points. First, the diffusive flux in reaction-diffusion systems is approximated by piecewise linear functions in the framework of structural approximations. This leads to a discrete network approximation of the considered continuous problem. Second, the discrete energy minimization yields optimal random packing of the domains (disks) in the representative cell. Therefore, the general problem of pattern formations based on the reaction-diffusion equations is reduced to the geometric problem of random packing. It is demonstrated that all random packings can be divided into classes associated with classes of isomorphic graphs obtained from the Delaunay triangulation. The unique optimal solution is constructed in each class of random packings. If the number of disks per representative cell is finite, the number of classes of isomorphic graphs, and hence the number of optimal packings, is also finite. Copyright © 2016 Elsevier Inc. All rights reserved.
Why Does Rebalancing Class-Unbalanced Data Improve AUC for Linear Discriminant Analysis?
Xue, Jing-Hao; Hall, Peter
2015-05-01
Many established classifiers fail to identify the minority class when it is much smaller than the majority class. To tackle this problem, researchers often first rebalance the class sizes in the training dataset, through oversampling the minority class or undersampling the majority class, and then use the rebalanced data to train the classifiers. This leads to interesting empirical patterns. In particular, using the rebalanced training data can often improve the area under the receiver operating characteristic curve (AUC) for the original, unbalanced test data. The AUC is a widely-used quantitative measure of classification performance, but the property that it increases with rebalancing has, as yet, no theoretical explanation. In this note, using Gaussian-based linear discriminant analysis (LDA) as the classifier, we demonstrate that, at least for LDA, there is an intrinsic, positive relationship between the rebalancing of class sizes and the improvement of AUC. We show that the largest improvement of AUC is achieved, asymptotically, when the two classes are fully rebalanced to be of equal sizes.
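The note's result is asymptotic and theoretical; the sketch below only shows the mechanics it studies, on synthetic Gaussian data with a shared covariance: fit the LDA direction on unbalanced and on oversampled training sets, then score an unbalanced test set by AUC. All sample sizes, means and covariances here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
COV = [[1.0, 0.5], [0.5, 1.0]]

def lda_direction(X0, X1):
    """Gaussian LDA discriminant direction: w = S_pooled^{-1} (m1 - m0)."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    S = (np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)) \
        / (len(X0) + len(X1) - 2)
    return np.linalg.solve(S, m1 - m0)

def auc(s0, s1):
    """AUC = P(score of a random class-1 point exceeds a class-0 point)."""
    diff = s1[:, None] - s0[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

# unbalanced training set: 500 majority vs 25 minority points
X0 = rng.multivariate_normal([0, 0], COV, 500)
X1 = rng.multivariate_normal([1, 1], COV, 25)
X1_over = X1[rng.integers(0, len(X1), 500)]   # oversample with replacement

w_unbal = lda_direction(X0, X1)
w_rebal = lda_direction(X0, X1_over)

# large, still-unbalanced test set, scored by both directions
T0 = rng.multivariate_normal([0, 0], COV, 4000)
T1 = rng.multivariate_normal([1, 1], COV, 200)
auc_unbal = auc(T0 @ w_unbal, T1 @ w_unbal)
auc_rebal = auc(T0 @ w_rebal, T1 @ w_rebal)
print(round(auc_unbal, 3), round(auc_rebal, 3))
```

Because AUC depends only on the discriminant direction, not the intercept, this setup isolates exactly the quantity whose improvement under rebalancing the note explains.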
Connecting in Class? College Class Size and Inequality in Academic Social Capital
ERIC Educational Resources Information Center
Beattie, Irenee R.; Thiele, Megan
2016-01-01
College students who interact with professors and peers about academic matters have better college outcomes. Although institutional factors influence engagement, prior scholarship has not systematically examined whether class sizes affect students' academic interactions, nor whether race or first-generation status moderate such effects. We…
The Allocation of Teachers in Schools--An Alternative to the Class Size Dialogue.
ERIC Educational Resources Information Center
Loader, David N.
1978-01-01
This article looks beyond class size to such specifics as teachers' load, subject electives available, subject load, and different class groupings in developing a flow chart that gives added understanding and control over the variables relating to the deployment of teachers. (Author/IRT)
Making Class Size Work in the Middle Grades
ERIC Educational Resources Information Center
Tienken, C. H.; Achilles, C. M.
2006-01-01
Most research on the positive effects of class-size reduction (CSR) has occurred at the elementary level (Word, Johnston, Bain, Fulton, Zaharias, Lintz, Achilles, Folger, & Breda, 1990; Molnar, Smith, Zahorik, Palmer, Halbach, & Ehrle, 1999). Is CSR an important variable in improving education in the middle grades? Can small classes be…
All We Need Is a Little Class.
ERIC Educational Resources Information Center
Krieger, Jean D.
This study was designed to discover the nature of interactions between effective teachers in regular-sized classes with 25 or more students and small-size classes with fewer than 18 students. Eleven public school primary classrooms were observed, and the interactions between the teacher and students were studied. Verbal and nonverbal interactions…
The Non-Cognitive Returns to Class Size
ERIC Educational Resources Information Center
Dee, Thomas S.; West, Martin R.
2011-01-01
The authors use nationally representative survey data and a research design that relies on contemporaneous within-student and within-teacher comparisons across two academic subjects to estimate how class size affects certain non-cognitive skills in middle school. Their results indicate that smaller eighth-grade classes are associated with…
Optimising the location of antenatal classes.
Tomintz, Melanie N; Clarke, Graham P; Rigby, Janette E; Green, Josephine M
2013-01-01
To combine microsimulation and location-allocation techniques to determine antenatal class locations that minimise the distance travelled from home by potential users. Microsimulation modelling and location-allocation modelling. City of Leeds, UK. Potential users of antenatal classes. An individual-level microsimulation model was built to estimate the number of births for small areas by combining data from the UK Census 2001 and the Health Survey for England 2006. Using this model as a proxy for service demand, we then used a location-allocation model to optimise locations. Different scenarios show the advantage of combining these methods to optimise the (re)location of antenatal classes and therefore reduce inequalities in access to services for pregnant women. Use of these techniques should lead to better use of resources by allowing planners to identify optimal locations of antenatal classes which minimise women's travel. These results are especially important for health-care planners tasked with the difficult issue of targeting scarce resources in a cost-efficient, but also effective and accessible, manner. Copyright © 2011 Elsevier Ltd. All rights reserved.
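A common formulation for this kind of location-allocation task is the p-median problem: open p facilities so that the demand-weighted distance to the nearest open facility is minimised. The greedy sketch below is one standard heuristic for it; the coordinates, demand weights and candidate sites are invented stand-ins for the paper's microsimulated birth estimates and venue list.

```python
import numpy as np

def greedy_p_median(demand_xy, weights, candidate_xy, p):
    """Greedy heuristic for the p-median location-allocation problem:
    repeatedly open the candidate site that most reduces the total
    weighted distance from demand points to their nearest open site."""
    d = np.linalg.norm(demand_xy[:, None, :] - candidate_xy[None, :, :], axis=2)
    open_sites = []
    best = np.full(len(demand_xy), np.inf)   # distance to nearest open site
    for _ in range(p):
        costs = np.array([(weights * np.minimum(best, d[:, j])).sum()
                          for j in range(d.shape[1])])
        costs[open_sites] = np.inf           # never reopen a site
        j = int(np.argmin(costs))
        open_sites.append(j)
        best = np.minimum(best, d[:, j])
    return open_sites, float((weights * best).sum())

rng = np.random.default_rng(2)
demand = rng.uniform(0, 10, (200, 2))   # small-area centroids (invented)
births = rng.integers(1, 20, 200)       # microsimulated demand proxy (invented)
sites = rng.uniform(0, 10, (15, 2))     # candidate class venues (invented)
chosen, cost = greedy_p_median(demand, births, sites, p=3)
print(chosen, round(cost, 1))
```

Opening more sites can only reduce the weighted travel total, which is the trade-off planners sweep when comparing relocation scenarios.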
Optimization of a large-scale microseismic monitoring network in northern Switzerland
NASA Astrophysics Data System (ADS)
Kraft, Toni; Mignan, Arnaud; Giardini, Domenico
2013-10-01
We have developed a network optimization method for regional-scale microseismic monitoring networks and applied it to optimize the densification of the existing seismic network in northeastern Switzerland. The new network will build the backbone of a 10-yr study on the neotectonic activity of this area that will help to better constrain the seismic hazard imposed on nuclear power plants and waste repository sites. This task defined the requirements regarding location precision (0.5 km in epicentre and 2 km in source depth) and detection capability [magnitude of completeness Mc = 1.0 (ML)]. The goal of the optimization was to find the geometry and size of the network that met these requirements. Existing stations in Switzerland, Germany and Austria were considered in the optimization procedure. We based the optimization on the simulated annealing approach proposed by Hardt & Scherbaum, which aims to minimize the volume of the error ellipsoid of the linearized earthquake location problem (D-criterion). We have extended their algorithm to:
We calculated optimized geometries for networks with 10-35 added stations and tested the stability of the optimization result by repeated runs with changing initial conditions. Further, we estimated the attainable magnitude of completeness (Mc) for the different sized optimal networks using the Bayesian Magnitude of Completeness (BMC) method introduced by Mignan et al. The algorithm developed in this study is also applicable to smaller optimization problems, for example, small local monitoring networks. Possible applications are volcano monitoring, the surveillance of induced seismicity associated with geotechnical operations and many more. Our algorithm is especially useful to optimize networks in populated areas with heterogeneous noise conditions and if complex velocity structures or existing stations have to be considered.
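The simulated-annealing loop behind such network optimizations can be sketched compactly. Note the objective below is a deliberately simple stand-in (mean distance from trial events to the four nearest stations), not the D-criterion error-ellipsoid volume the authors minimise; the study area, perturbation scale and cooling schedule are likewise invented.

```python
import numpy as np

rng = np.random.default_rng(3)
events = rng.uniform(0, 100, (150, 2))   # hypothetical epicentres in a 100 km box

def misfit(stations):
    """Toy network-quality proxy: mean distance from each trial event to its
    four nearest stations. (The paper instead minimises the error-ellipsoid
    volume of the linearised location problem, the D-criterion.)"""
    d = np.linalg.norm(events[:, None, :] - stations[None, :, :], axis=2)
    return np.sort(d, axis=1)[:, :4].mean()

def anneal(n_sta=12, steps=3000, t0=1.0):
    """Simulated annealing: perturb one station at a time and accept worse
    layouts with probability exp(-dF/T) under a linear cooling schedule."""
    s = rng.uniform(0, 100, (n_sta, 2))
    f = misfit(s)
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-6
        cand = s.copy()
        i = rng.integers(n_sta)
        cand[i] = np.clip(cand[i] + rng.normal(0, 5, 2), 0, 100)
        fc = misfit(cand)
        if fc < f or rng.random() < np.exp((f - fc) / t):  # Metropolis rule
            s, f = cand, fc
    return s, f

stations, final = anneal()
print(round(final, 2))
```

Fixing some station coordinates (existing stations in neighbouring countries, in the paper's case) simply means excluding those rows from the perturbation step.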
Nicassio, P M
1977-12-01
A study was conducted to determine the way in which stereotypes of machismo and femininity are associated with family size and perceptions of family planning. A total of 144 adults, male and female, from a lower class and an upper middle class urban area in Colombia were asked to respond to photographs of Colombian families varying in size and state of completeness. The study illustrated the critical role of sex-role identity and sex-role organization as variables having an effect on fertility. The lower-class respondents described parents in the photographs as significantly more macho or feminine because of their children than the upper-middle-class subjects did. Future research should attempt to measure when this drive to sex-role identity is strongest, i.e., when men and women are most driven to reproduce in order to "prove" themselves. Both lower- and upper-middle-class male groups considered male dominance in marriage to be directly linked with family size. Perceptions of the use of family planning decreased linearly with family size for both social groups, although the lower-class females attributed more family planning to spouses of large families than upper-middle-class females. It is suggested that further research deal with the ways in which constructs of machismo and male dominance vary between the sexes and among socioeconomic groups and the ways in which they impact on fertility.
Labra, Fabio A; Hernández-Miranda, Eduardo; Quiñones, Renato A
2015-01-01
We study the temporal variation in the empirical relationships among body size (S), species richness (R), and abundance (A) in a shallow marine epibenthic faunal community in Coliumo Bay, Chile. We also extend previous analyses by calculating individual energy use (E) and test whether its bivariate and trivariate relationships with S and R are in agreement with expectations derived from the energetic equivalence rule. Carnivorous and scavenger species representing over 95% of sample abundance and biomass were studied. For each individual, body size (g) was measured and E was estimated following published allometric relationships. Data for each sample were tabulated into exponential body size bins, comparing species-averaged values with individual-based estimates which allow species to potentially occupy multiple size classes. For individual-based data, both the number of individuals and species across body size classes are fit by a Weibull function rather than by a power law scaling. Species richness is also a power law of the number of individuals. Energy use shows a piecewise scaling relationship with body size, with energetic equivalence holding true only for size classes above the modal abundance class. Species-based data showed either weak linear or no significant patterns, likely due to the decrease in the number of data points across body size classes. Hence, for individual-based size spectra, the SRA relationship seems to be general despite seasonal forcing and strong disturbances in Coliumo Bay. The unimodal abundance distribution results in a piecewise energy scaling relationship, with small individuals showing a positive scaling and large individuals showing energetic equivalence. Hence, strict energetic equivalence should not be expected for unimodal abundance distributions. 
On the other hand, while species-based data do not show unimodal SRA relationships, energy use across body size classes did not show significant trends, supporting energetic equivalence. PMID:25691966
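The individual-based tabulation the study relies on, binning every individual into exponential body-size classes so one species can occupy several classes, is easy to sketch. The sample below is synthetic (39 species with log-normal body masses, all parameters invented), shown only to illustrate the bookkeeping.

```python
import numpy as np

rng = np.random.default_rng(5)
# hypothetical sample: species identity and body mass (g) for 2000 individuals
species = rng.integers(0, 39, 2000)
mass = np.exp(rng.normal(1.0, 1.2, 2000))   # log-normal body sizes (invented)

# exponential (log2) body-size bins; individual-based tabulation lets a
# species contribute to every class its individuals fall into
bins = 2.0 ** np.arange(-4, 10)
idx = np.digitize(mass, bins)
abundance = np.bincount(idx, minlength=len(bins) + 1)   # N per size class
richness = [len(set(species[idx == k])) for k in range(len(bins) + 1)]

mode_class = int(np.argmax(abundance))      # the modal abundance class that
print(mode_class, abundance[mode_class])    # splits the energy-use scaling
```

Species-averaged tabulation would instead assign each species a single mean mass, which is why it yields far fewer points per size class, as the abstract notes.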
Distributed Optimization for a Class of Nonlinear Multiagent Systems With Disturbance Rejection.
Wang, Xinghu; Hong, Yiguang; Ji, Haibo
2016-07-01
The paper studies the distributed optimization problem for a class of nonlinear multiagent systems in the presence of external disturbances. Solving the problem requires achieving optimal multiagent consensus based on local cost-function information and neighboring information while rejecting local disturbance signals modeled by an exogenous system. With convex analysis and the internal model approach, we propose a distributed optimization controller for heterogeneous and nonlinear agents in the form of continuous-time minimum-phase systems with unity relative degree. We prove that the proposed design solves the exact optimization problem while rejecting disturbances.
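The paper's controller handles nonlinear agents and disturbance rejection via an internal model; the sketch below shows only the core distributed-optimization idea for the simplest case of scalar integrator agents with private quadratic costs, using a proportional-integral consensus-plus-gradient rule. The costs, graph and gains are invented for illustration.

```python
import numpy as np

# five agents on a ring graph, each holding a private cost f_i(x) = (x - a_i)^2;
# the team objective sum_i f_i is minimised at x* = mean(a) = 2.6
a = np.array([1.0, 4.0, -2.0, 7.0, 3.0])
n = len(a)
neigh = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

x = np.zeros(n)    # agent states
v = np.zeros(n)    # integral consensus term (internal-model-like correction)
eta = 0.05
for _ in range(8000):
    # Laplacian disagreement with neighbours, computed from local info only
    Lx = np.array([sum(x[i] - x[j] for j in neigh[i]) for i in range(n)])
    v = v + eta * Lx                          # integral action forces consensus
    x = x - eta * (2 * (x - a) + Lx + v)      # local gradient + neighbour coupling
print(x.round(3))
```

Without the integral term v, the agents would settle at a biased compromise between their own minimizers and their neighbours'; the integrator drives the disagreement to zero exactly, so all states converge to the team-optimal value.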
Burruss, Nancy M; Billings, Diane M; Brownrigg, Vicki; Skiba, Diane J; Connors, Helen R
2009-01-01
With the expanding numbers of nursing students enrolled in Web-based courses and the shortage of faculty, class sizes are increasing. This exploratory descriptive study examined class size in relation to the use of technology and to particular educational practices and outcomes. The sample consisted of undergraduate (n = 265) and graduate (n = 863) students enrolled in fully Web-based nursing courses. The Evaluating Educational Uses of Web-based Courses in Nursing survey (Billings, D., Connors, H., Skiba, D. (2001). Benchmarking best practices in Web-based nursing courses. Advances in Nursing Science, 23, 41-52) and the Social Presence Scale (Gunawardena, C. N., Zittle, F. J. (1997). Social presence as a predictor of satisfaction within a computer-mediated conferencing environment. The American Journal of Distance Education, 11, 9-26) were used to gather data about the study variables. Class sizes were defined as very small (1 to 10 students), small (11 to 20 students), medium (21 to 30 students), large (31 to 40 students), and very large (41 students and above). Descriptive and inferential statistics were used to analyze the data. There were significant differences by class size in students' perceptions of active participation in learning, student-faculty interaction, peer interaction, and connectedness. Some differences by class size between undergraduate and graduate students were also found, and these require further study.
Nguyen, Huong Minh; Kang, Changwon
2014-02-01
Bacteriophage T7 terminator Tϕ is a class I intrinsic terminator coding for an RNA hairpin structure immediately followed by oligo(U), which has been extensively studied in terms of its transcription termination mechanism, but little is known about its physiological or regulatory functions. In this study, using a T7 mutant phage, where a 31-bp segment of Tϕ was deleted from the genome, we discovered that deletion of Tϕ from T7 reduces the phage burst size but delays lysis timing, both of which are disadvantageous for the phage. The burst downsizing could directly result from Tϕ deletion-caused upregulation of gene 17.5, coding for holin, among other Tϕ downstream genes, because infection of gp17.5-overproducing Escherichia coli by wild-type T7 phage showed similar burst downsizing. However, the lysis delay was not associated with cellular levels of holin or lysozyme or with rates of phage adsorption. Instead, when allowed to evolve spontaneously in five independent adaptation experiments, the Tϕ-lacking mutant phage, after 27 or 29 passages, recovered both burst size and lysis time reproducibly by deleting early genes 0.5, 0.6, and 0.7 of class I, among other mutations. Deletion of genes 0.5 to 0.7 from the Tϕ-lacking mutant phage decreased expression of several Tϕ downstream genes to levels similar to that of the wild-type phage. Accordingly, phage T7 lysis timing is associated with cellular levels of Tϕ downstream gene products. This suggests the involvement of unknown factor(s) besides the known lysis proteins, lysozyme and holin, and that Tϕ plays a role in optimizing burst size and lysis time during T7 infection. IMPORTANCE: Bacteriophages are bacterium-infecting viruses. After producing numerous progeny inside bacteria, phages lyse bacteria using their lysis protein(s) to get out and start a new infection cycle. Normally, lysis is tightly controlled to ensure phage progeny are maximally produced and released at an optimal time.
Here, we have discovered that phage T7, besides employing its known lysis proteins, additionally uses its transcription terminator Tϕ to guarantee the optimal lysis of the E. coli host. Tϕ, positioned in the middle of the T7 genome, must be inactivated at least partially to allow for transcription-driven translocation of T7 DNA into hosts and expression of Tϕ downstream but promoter-lacking genes. What role is played by Tϕ before inactivation? Without Tϕ, not only was lysis time delayed but also the number of progeny was reduced in this study. Furthermore, T7 can overcome Tϕ deletion by further deleting some genes, highlighting that a phage has multiple strategies for optimizing lysis.
ERIC Educational Resources Information Center
Galton, Maurice; Pell, Tony
2012-01-01
This paper describes changes which took place in 37 Hong Kong primary schools where class sizes were reduced from 38 to between 20 and 25. Chinese, English and mathematics classes were observed over three years from Primary 1 (aged 6) to Primary 3. For 75% of observations no child was the focus of the teacher's attention in large classes. Reducing…
Early Implementation of the Class Size Reduction Initiative.
ERIC Educational Resources Information Center
Illig, David C.
A survey of school districts was conducted to determine the initial progress and problems associated with the 1997 Class Size Reduction (CSR) Initiative. Data reveal that most school districts had enough space for smaller classes for at least two grade levels; small school districts were much less likely to report space constraints. The CSR did…
Student Ratings of Instruction: Examining the Role of Academic Field, Course Level, and Class Size
ERIC Educational Resources Information Center
Laughlin, Anne M.
2014-01-01
This dissertation investigated the relationship between course characteristics and student ratings of instruction at a large research intensive university. Specifically, it examined the extent to which academic field, course level, and class size were associated with variation in mean class ratings. Past research consistently identifies…
The Influence of Small Class Size, Duration, Intensity, and Heterogeneity on Head Start Fade
ERIC Educational Resources Information Center
Huss, Christopher D.
2010-01-01
The researcher conducted a nonexperimental study to investigate and analyze the influence of reduced class sizes, intensity (all day and every day), duration (five years), and heterogeneity (random class assignment) on the Head Start Fade effect. The researcher employed retrospective data analysis using a longitudinal explanatory design on data…
The Relationship of Class Size Effects and Teacher Salary
ERIC Educational Resources Information Center
Peevely, Gary; Hedges, Larry; Nye, Barbara A.
2005-01-01
The effects of class size on academic achievement have been studied for decades. Although the results of small-scale, randomized experiments and large-scale, econometric studies point to positive effects of small classes, some scholars see the evidence as ambiguous. Recent analyses from a 4-year, large-scale, randomized experiment on the effects…
Class Size: Teachers' Perspectives
ERIC Educational Resources Information Center
Watson, Kevin; Handal, Boris; Maher, Marguerite
2016-01-01
A consistent body of research shows that large classes have been perceived by teachers as an obstacle to deliver quality teaching. This large-scale study sought to investigate further those differential effects by asking 1,119 teachers from 321 K-12 schools in New South Wales (Australia) their perceptions of ideal class size for a variety of…
Elsherif, Noha Ibrahim; Shamma, Rehab Nabil; Abdelbary, Ghada
2017-02-01
Treating a nail infection such as onychomycosis is challenging, as the human nail plate acts as a formidable barrier against drug permeation. Available oral and topical treatments have several setbacks. Terbinafine hydrochloride (TBH), belonging to the allylamine class, is mainly used for treatment of onychomycosis. This study aims to formulate TBH in a nanobased spanlastic vesicular carrier that enables and enhances drug delivery through the nail. The nanovesicles were formulated by the ethanol injection method, using either Span® 60 or Span® 65, together with Tween 80 or sodium deoxycholate as an edge activator. A full factorial design was implemented to study the effect of different formulation and process variables on the prepared TBH-loaded spanlastic nanovesicles. TBH entrapment efficiency percentage, particle size diameter, and percentage drug released after 2 h and 8 h were selected as dependent variables. Optimization was performed using Design-Expert® software to obtain an optimized formulation with high entrapment efficiency (62.35 ± 8.91%), average particle size of 438.45 ± 70.5 nm, and 29.57 ± 0.93 and 59.53 ± 1.73% TBH released after 2 and 8 h, respectively. The optimized formula was evaluated using differential scanning calorimetry and X-ray diffraction and was also morphologically examined using transmission electron microscopy. An ex vivo study was conducted to determine the permeation and retention of the optimized formulation in a human cadaver nail plate, and confocal laser scanning microscopy was used to show the extent of formulation permeation. In conclusion, the results confirmed that spanlastics exhibit promising results for the trans-ungual delivery of TBH.
A study on the impact of prioritising emergency department arrivals on the patient waiting time.
Van Bockstal, Ellen; Maenhout, Broos
2018-05-03
In the past decade, the crowding of the emergency department has gained considerable attention of researchers as the number of medical service providers is typically insufficient to fulfil the demand for emergency care. In this paper, we solve the stochastic emergency department workforce planning problem and consider the planning of nurses and physicians simultaneously for a real-life case study in Belgium. We study the patient arrival pattern of the emergency department in depth and consider different patient acuity classes by disaggregating the arrival pattern. We determine the personnel staffing requirements and the design of the shifts based on the patient arrival rates per acuity class such that the resource staffing cost and the weighted patient waiting time are minimised. In order to solve this multi-objective optimisation problem, we construct a Pareto set of optimal solutions via the ε-constraint method. For a particular staffing composition, the proposed model minimises the patient waiting time subject to upper bounds on the staffing size using the Sample Average Approximation Method. In our computational experiments, we discern the impact of prioritising the emergency department arrivals. Triaging results in lower patient waiting times for higher priority acuity classes and in a higher waiting time for the lowest priority class, which does not require immediate care. Moreover, we perform a sensitivity analysis to verify the impact of the arrival and service pattern characteristics, the prioritisation weights between different acuity classes and the incorporated shift flexibility in the model.
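The ε-constraint idea itself is simple: keep one objective as the target and turn the other into an upper-bound constraint that is swept over a grid, collecting one Pareto point per bound. The toy below uses an invented staffing-cost and waiting-time model (not the paper's stochastic queueing setup) purely to show the sweep.

```python
# toy model: staffing level n drives cost up and expected waiting time down
def cost(n):
    return 100 * n          # resource staffing cost (invented units)

def wait(n):
    return 60.0 / n         # crude waiting-time proxy in minutes (invented)

def epsilon_constraint(levels, eps_grid):
    """Build a Pareto front by minimising one objective (waiting time)
    subject to an upper bound eps on the other (staffing cost)."""
    front = []
    for eps in eps_grid:
        feasible = [n for n in levels if cost(n) <= eps]
        if feasible:
            n_best = min(feasible, key=wait)
            pt = (cost(n_best), wait(n_best))
            if pt not in front:            # keep each Pareto point once
                front.append(pt)
    return front

front = epsilon_constraint(range(1, 21), eps_grid=range(200, 2001, 200))
print(front)
```

In the paper's setting, the inner minimisation is itself a stochastic program solved by Sample Average Approximation; the sweep over the staffing bound is what traces out the cost/waiting-time trade-off curve.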
An optimal control strategies using vaccination and fogging in dengue fever transmission model
NASA Astrophysics Data System (ADS)
Fitria, Irma; Winarni, Pancahayani, Sigit; Subchan
2017-08-01
This paper discusses a model and an optimal control problem of dengue fever transmission. We classify the model into human and vector (mosquito) population classes. The human population comprises three subclasses: susceptible, infected, and resistant. The vector population is divided into wiggler (larval), susceptible, and infected classes. Thus, the model consists of six dynamic equations. To minimize the number of dengue fever cases, we design two optimal control variables in the model: fogging and vaccination. The objective function of the optimal control problem is to minimize the number of infected humans, the number of vectors, and the cost of the control efforts. Applying fogging optimally minimizes the vector population. We consider vaccination as a control variable because it is one of the efforts being developed to reduce the spread of dengue fever. We use the Pontryagin Minimum Principle to solve the optimal control problem. Furthermore, numerical simulation results are given to show the effect of the optimal control strategies in minimizing the epidemic of dengue fever.
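The six-compartment structure can be illustrated with a forward simulation. The sketch below uses constant control levels and entirely hypothetical parameters (transmission rates, recruitment, mortalities), whereas the paper derives time-varying optimal controls via the Pontryagin Minimum Principle; the point here is only the compartment bookkeeping and the direction of the control effect.

```python
def simulate(u1, u2, T=200.0, dt=0.05):
    """Forward-Euler run of a simplified host-vector dengue model with
    constant controls u1 (vaccination rate) and u2 (fogging-induced extra
    vector mortality). All parameter values are hypothetical."""
    beta_h, beta_v, gamma = 0.3, 0.2, 0.1   # transmission and recovery rates
    lam, m, mu_v = 50.0, 0.1, 0.1           # wiggler recruitment, maturation, vector death
    S, I, R = 990.0, 10.0, 0.0              # human subclasses, N = 1000
    W, Sv, Iv = 500.0, 500.0, 0.0           # wiggler, susceptible, infected vectors
    N, cum = 1000.0, 0.0
    for _ in range(int(T / dt)):
        new_inf = beta_h * S * Iv / N
        dS = -new_inf - u1 * S
        dI = new_inf - gamma * I
        dR = gamma * I + u1 * S
        dW = lam - (m + u2) * W
        dSv = m * W - beta_v * Sv * I / N - (mu_v + u2) * Sv
        dIv = beta_v * Sv * I / N - (mu_v + u2) * Iv
        # clamp at zero to keep the explicit Euler step physical
        S, I, R = max(S + dS * dt, 0), max(I + dI * dt, 0), max(R + dR * dt, 0)
        W = max(W + dW * dt, 0)
        Sv, Iv = max(Sv + dSv * dt, 0), max(Iv + dIv * dt, 0)
        cum += new_inf * dt                 # cumulative human infections
    return cum

print(round(simulate(0.0, 0.0)), round(simulate(0.01, 0.05)))
```

In the optimal control version, u1 and u2 become functions of time chosen to minimise an integral of infections plus quadratic control costs, with the adjoint equations supplied by the Minimum Principle.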
Power laws, discontinuities and regional city size distributions
Garmestani, A.S.; Allen, Craig R.; Gallagher, C.M.
2008-01-01
Urban systems are manifestations of human adaptation to the natural environment. City size distributions are the expression of hierarchical processes acting upon urban systems. In this paper, we test the entire city size distributions for the southeastern and southwestern United States (1990), as well as the size classes in these regions for power law behavior. We interpret the differences in the size of the regional city size distributions as the manifestation of variable growth dynamics dependent upon city size. Size classes in the city size distributions are snapshots of stable states within urban systems in flux.
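Testing a size distribution for power-law behavior typically starts from the maximum-likelihood (Hill) estimator of the tail exponent over sizes above a threshold. The sketch below checks the estimator on a synthetic Pareto sample; the threshold and sample are invented, not the paper's census data.

```python
import numpy as np

def powerlaw_alpha(sizes, xmin):
    """Maximum-likelihood (Hill) estimator for a continuous power-law tail:
    alpha = 1 + n / sum(ln(x / xmin)) over all observations x >= xmin."""
    x = np.asarray([s for s in sizes if s >= xmin], dtype=float)
    return 1.0 + len(x) / np.log(x / xmin).sum()

rng = np.random.default_rng(4)
alpha_true = 2.0
# synthetic city sizes: Pareto tail with xmin = 1000 via inverse-CDF sampling
cities = 1000.0 * (1.0 - rng.random(5000)) ** (-1.0 / (alpha_true - 1.0))
print(round(powerlaw_alpha(cities, 1000), 2))
```

Fitting separate exponents within size classes, as the paper does per region, amounts to applying the same estimator to each class's size window and comparing the results for discontinuities.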
Varying Radii of On-Axis Anode Hollows For kJ-Class Dense Plasma Focus
NASA Astrophysics Data System (ADS)
Shaw, Brian; Chapman, Steven; Falabella, Steven; Pankin, Alexei; Liu, Jason; Link, Anthony; Schmidt, Andréa
2017-10-01
A dense plasma focus (DPF) is a compact plasma gun that produces high-energy ion beams, up to several MeV, through strong potential gradients. Motivated by particle-in-cell simulations, we have tested a series of hollow anodes on our kJ-class DPF. Each anode has a different hollow size and has been studied to optimize ion beam production in helium, reduce anode sputter, and increase neutron yields in deuterium. We diagnose the rate at which electrode material is ablated and deposited onto nearby surfaces. This is of interest in the case of solid targets, which perform poorly in the presence of sputter. We have found that larger hollow radii produce more energetic ion beams, higher neutron yields, and less sputter than a flat-top anode. A complete comparison is presented. This work was prepared by LLNL under Contract DE-AC52-07NA27344 and supported by the Office of Defense Nuclear Nonproliferation Research and Development within the U.S. Department of Energy's National Nuclear Security Administration.
Cífková, Eva; Hájek, Roman; Lísa, Miroslav; Holčapek, Michal
2016-03-25
The goal of this work is a systematic optimization of hydrophilic interaction liquid chromatography (HILIC) separation of acidic lipid classes (namely phosphatidic acids-PA, lysophosphatidic acids-LPA, phosphatidylserines-PS and lysophosphatidylserines-LPS) and other lipid classes under mass spectrometry (MS) compatible conditions. The main parameters included in this optimization are the type of stationary phase used in HILIC, the pH of the mobile phase, and the type and concentration of mobile phase additives. Nine HILIC columns with different chemistries (unmodified silica, silica modified with diol, 2-picolylamine, diethylamine and 1-aminoanthracene, and hydride silica) are compared with emphasis on the peak shapes of acidic lipid classes. The optimization of pH is correlated with the theoretical calculation of acid-base equilibria of the studied lipid classes. The final method, using the hydride column, pH 4 adjusted by formic acid, and a gradient of acetonitrile and 40 mmol/L aqueous ammonium formate, provides good peak shapes for all analyzed lipid classes including acidic lipids. This method is applied to the identification of lipids in real samples of porcine brain and kidney extracts. Copyright © 2016 Elsevier B.V. All rights reserved.
Factors associated with long-term species composition in dry tropical forests of Central India
NASA Astrophysics Data System (ADS)
Agarwala, M.; DeFries, R. S.; Qureshi, Q.; Jhala, Y. V.
2016-10-01
The long-term future of species composition in forests depends on regeneration. Many factors can affect regeneration, including human use, environmental conditions, and species’ traits. This study examines the influence of these factors in a tropical deciduous forest of Central India, which is heavily used by local, forest-dependent residents for livestock grazing, fuel-wood extraction, construction and other livelihood needs. We measure size-class proportions (the ratio of abundance of a species at a site in a higher size class to total abundance in both lower and higher size classes) for 39 tree species across 20 transects at different intensities of human use. The size-class proportions for medium to large trees and for small to medium-sized trees were negatively associated with species that are used for local construction, while size class proportions for saplings to small trees were positively associated with those species that are fire resistant and negatively associated with livestock density. Results indicate that grazing and fire prevent non-fire resistant species from reaching reproductive age, which can alter the long term composition and future availability of species that are important for local use and ecosystem services. Management efforts to reduce fire and forest grazing could reverse these impacts on long-term forest composition.
ERIC Educational Resources Information Center
Parks-Stamm, Elizabeth J.; Zafonte, Maria; Palenque, Stephanie M.
2017-01-01
Student participation in online discussion forums is associated with positive outcomes for student achievement and satisfaction, but research findings on the impact of class size and instructors' participation on student participation have been mixed. The present study analyzed the frequency of instructor and student posts in asynchronous…
A Descriptive Evaluation of the Federal Class-Size Reduction Program: Final Report
ERIC Educational Resources Information Center
Millsap, Mary Ann; Giancola, Jennifer; Smith, W. Carter; Hunt, Dana; Humphrey, Daniel C.; Wechsler, Marjorie E.; Riehl, Lori M.
2004-01-01
The federal Class-Size Reduction (CSR) Program, P.L. 105-277, begun in Fiscal Year 1999, represented a major federal commitment to help school districts hire additional qualified teachers, especially in the early elementary grades, so children would learn in smaller classes. The CSR program also allowed funds to be spent as professional…
Class-Size Effects on Adolescents' Mental Health and Well-Being in Swedish Schools
ERIC Educational Resources Information Center
Jakobsson, Niklas; Persson, Mattias; Svensson, Mikael
2013-01-01
This paper analyzes whether class size has an effect on the prevalence of mental health problems and well-being among adolescents in Swedish schools. We use cross-sectional data collected in year 2008 covering 2755 Swedish adolescents in ninth grade from 40 schools and 159 classes. We utilize different econometric approaches to address potential…
Utilizing Online Education in Florida to Meet Mandated Class Size Limitations
ERIC Educational Resources Information Center
Mattox, Kari Ann
2012-01-01
With the passage of a state constitutional amendment in 2002, Florida school districts faced the challenge of meeting class size mandates in core subjects, such as mathematics, English, and science by the 2010-2011 school year, or face financial penalties. Underpinning the amendment's goals was the argument that smaller classes are more effective…
ERIC Educational Resources Information Center
Dieterle, Steven
2012-01-01
Prior research has established the potential for achievement gains from attending smaller classes. However, large statewide class-size reduction (CSR) policies have not been found to consistently realize such gains. A leading explanation for the disappointing performance of CSR policies is that schools are forced to hire additional teachers of…
Class Size: Can School Districts Capitalize on the Benefits of Smaller Classes?
ERIC Educational Resources Information Center
Hertling, Elizabeth; Leonard, Courtney; Lumsden, Linda; Smith, Stuart C.
2000-01-01
This report is intended to help policymakers understand the benefits of class-size reduction (CSR). It assesses the costs of CSR, considers some research-based alternatives, and explores strategies that will help educators realize the benefits of CSR when it is implemented. It examines how CSR enhances student achievement, such as when the…
Class Size Reduction: Lessons Learned from Experience. Policy Brief No. Twenty-Three.
ERIC Educational Resources Information Center
McRobbie, Joan; Finn, Jeremy D.; Harman, Patrick
New federal proposals have fueled national interest in class-size reduction (CSR). However, CSR raises numerous concerns, some of which are addressed in this policy brief. The text draws on the experiences of states and districts that have implemented CSR. The brief addresses the following 15 concerns: Do small classes in and of themselves affect…
Experimental Estimates of the Impacts of Class Size on Test Scores: Robustness and Heterogeneity
ERIC Educational Resources Information Center
Ding, Weili; Lehrer, Steven F.
2011-01-01
Proponents of class size reductions (CSRs) draw heavily on the results from Project Student/Teacher Achievement Ratio to support their initiatives. Adding to the political appeal of these initiative are reports that minority and economically disadvantaged students received the largest benefits from smaller classes. We extend this research in two…
The single mirror small size telescope (SST-1M) of the Cherenkov Telescope Array
NASA Astrophysics Data System (ADS)
Aguilar, J. A.; Bilnik, W.; Borkowski, J.; Cadoux, F.; Christov, A.; della Volpe, D.; Favre, Y.; Heller, M.; Kasperek, J.; Lyard, E.; Marszałek, A.; Moderski, R.; Montaruli, T.; Porcelli, A.; Prandini, E.; Rajda, P.; Rameez, M.; Schioppa, E., Jr.; Troyano Pujadas, I.; Zietara, K.; Blocki, J.; Bogacz, L.; Bulik, T.; Frankowski, A.; Grudzinska, M.; Idźkowski, B.; Jamrozy, M.; Janiak, M.; Lalik, K.; Mach, E.; Mandat, D.; Michałowski, J.; Neronov, A.; Niemiec, J.; Ostrowski, M.; Paśko, P.; Pech, M.; Schovanek, P.; Seweryn, K.; Skowron, K.; Sliusar, V.; Stawarz, L.; Stodulska, M.; Stodulski, M.; Toscano, S.; Walter, R.; WiÈ©cek, M.; Zagdański, A.
2016-07-01
The Small Size Telescope with Single Mirror (SST-1M) is one of the proposed types of Small Size Telescopes (SST) for the Cherenkov Telescope Array (CTA). The CTA south array will be composed of about 100 telescopes, of which about 70 are of the SST class, optimized for the detection of gamma rays in the energy range from 5 TeV to 300 TeV. The SST-1M implements Davies-Cotton optics with a 4 m diameter dish and a 9° field of view. The Cherenkov light produced in atmospheric showers is focused onto an 88 cm wide hexagonal photo-detection plane, composed of 1296 custom-designed large-area hexagonal silicon photomultipliers (SiPMs) and a fully digital readout and trigger system. The SST-1M camera has been designed to provide high performance in a robust, compact and lightweight design. In this contribution, we review the different steps that led to the realization of the telescope prototype and its innovative camera.
Fourier spatial frequency analysis for image classification: training the training set
NASA Astrophysics Data System (ADS)
Johnson, Timothy H.; Lhamo, Yigah; Shi, Lingyan; Alfano, Robert R.; Russell, Stewart
2016-04-01
The Directional Fourier Spatial Frequencies (DFSF) of a 2D image can identify similarity in spatial patterns within groups of related images. A Support Vector Machine (SVM) can then be used to classify images if the inter-image variance of the FSF in the training set is bounded. However, if variation in FSF increases with training set size, accuracy may decrease as the training set grows. This calls for a method to identify a set of training images from among the originals that can form a vector basis for the entire class. Applying the Cauchy product method, we extract the DFSF spectrum from radiographs of osteoporotic bone, use it as a matched filter set to eliminate noise and image-specific frequencies, and demonstrate that selecting a subset of superclassifiers from within a set of training images improves SVM accuracy. Central to this challenge is that the size of the search space can become computationally prohibitive for all but the smallest training sets. We are investigating methods to reduce the search space to identify an optimal subset of basis training images.
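One way to read "directional Fourier spatial frequencies" is as FFT magnitude binned by orientation. The sketch below (the binning scheme is an assumption for illustration, not the authors' exact DFSF or Cauchy-product procedure) shows such a spectrum picking out the dominant orientation of a striped test image:

```python
import numpy as np

def directional_spectrum(img, n_bins=18):
    """FFT magnitude of a 2D image binned by orientation angle in [0, pi)."""
    mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = img.shape
    fy, fx = np.meshgrid(np.arange(h) - h // 2,
                         np.arange(w) - w // 2, indexing="ij")
    angle = np.mod(np.arctan2(fy, fx), np.pi)            # orientation, 0..pi
    bins = np.minimum((angle / np.pi * n_bins).astype(int), n_bins - 1)
    spec = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    return spec / spec.sum()                             # normalize for comparison

# Vertical stripes with period 8 concentrate FFT energy on the horizontal
# frequency axis, i.e. in the first orientation bin.
x = np.arange(64)
stripes = np.tile(np.sin(2 * np.pi * x / 8), (64, 1))
spec = directional_spectrum(stripes)
```

Images with similar spatial texture yield similar spectra, which is the property the matched-filter step relies on.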
Optimal Shakedown of the Thin-Wall Metal Structures Under Strength and Stiffness Constraints
NASA Astrophysics Data System (ADS)
Alawdin, Piotr; Liepa, Liudas
2017-06-01
Classical optimization problems for metal structures have been confined mainly to Class 1 cross-sections, but in practice cross-sections of higher classes are commonly used. This paper presents a new mathematical model for the shakedown optimization problem for metal structures whose elements are designed with Class 1 to Class 4 cross-sections, under variable quasi-static loads. The features of limited plastic redistribution of forces in structures with thin-walled elements are taken into account. The authors assume elastic-plastic flexural buckling of members in one plane, without lateral-torsional buckling. Design formulae for Methods 1 and 2 for members are analyzed. Structural stiffness constraints are also incorporated to satisfy the serviceability limit state requirements. Using mathematical programming theory and extremum principles, a structure optimization algorithm is developed and justified with a numerical experiment on plane metal frames.
NASA Astrophysics Data System (ADS)
Sandrik, Suzannah
Optimal solutions to the impulsive circular phasing problem, a special class of orbital maneuver in which impulsive thrusts shift a vehicle's orbital position by a specified angle, are found using primer vector theory. The complexities of optimal circular phasing are identified and illustrated using specifically designed Matlab software tools. Information from these new visualizations is applied to explain discrepancies in locally optimal solutions found by previous researchers. Two non-phasing circle-to-circle impulsive rendezvous problems are also examined to show the applicability of the tools developed here to a broader class of problems and to show how optimizing these rendezvous problems differs from the circular phasing case.
Kuwaiti, Ahmed Al
2015-01-01
This study investigates the effect of the interaction between response rate and class size on students' evaluations of instructors and courses at health science colleges in Saudi Arabia. A retrospective study design was adopted to examine Course Evaluation Surveys (CES) conducted at the health science colleges of the University of Dammam (UOD) in the academic year 2013-2014. The CES data, downloaded from an exclusive online application, 'UDQUEST', covered 337 different courses and 15,264 surveys. Two-way analysis of variance was used to test whether there is a significant interaction between class size and response rate in students' evaluations of courses and instructors. The study showed that when the class size is small, a high response rate is required for student evaluation of instructors at health science colleges, whereas a medium response rate suffices for students' evaluation of courses. When the class size is medium, a medium or high response rate is needed for students' evaluation of both instructors and courses. The results recommend that administrators of health science colleges be mindful in interpreting students' evaluations of courses and instructors, and suggest that the interaction between response rate and class size is an important factor to take into consideration when interpreting those evaluations.
Optimizing Linked Perceptual Class Formation and Transfer of Function
ERIC Educational Resources Information Center
Fields, Lanny; Garruto, Michelle
2009-01-01
A linked perceptual class consists of two distinct perceptual classes, A' and B', the members of which have become related to each other. For example, a linked perceptual class might be composed of many pictures of a woman (one perceptual class) and the sounds of that woman's voice (the other perceptual class). In this case, any sound of the…
Fast Solution in Sparse LDA for Binary Classification
NASA Technical Reports Server (NTRS)
Moghaddam, Baback
2010-01-01
An algorithm that performs sparse linear discriminant analysis (Sparse-LDA) finds near-optimal solutions in far less time than the prior art when specialized to binary classification (of 2 classes). Sparse-LDA is a type of feature- or variable-selection problem with numerous applications in statistics, machine learning, computer vision, computational finance, operations research, and bio-informatics. Because of their combinatorial nature, feature- or variable-selection problems are NP-hard or computationally intractable in cases involving more than 30 variables or features. Therefore, one typically seeks approximate solutions by means of greedy search algorithms. The prior Sparse-LDA algorithm was a greedy algorithm that considered the best variable or feature to add to, or delete from, its subsets in order to maximally discriminate between multiple classes of data. The present algorithm is designed for the special but prevalent case of 2-class or binary classification (e.g. 1 vs. 0, functioning vs. malfunctioning, or change vs. no change). The present algorithm provides near-optimal solutions on large real-world datasets having hundreds or even thousands of variables or features (e.g. selecting the fewest wavelength bands in a hyperspectral sensor to do terrain classification) and does so in typical computation times of minutes, as compared to the days or weeks taken by the prior art. Sparse-LDA requires solving generalized eigenvalue problems for a large number of variable subsets (represented by the submatrices of the input within-class and between-class covariance matrices). In the general (full-rank) case, the amount of computation scales at least cubically with the number of variables, and the size of the problems that can be solved is limited accordingly. However, in binary classification, the principal eigenvalues can be found using a special analytic formula, without resorting to costly iterative techniques.
The present algorithm exploits this analytic form along with the inherent sequential nature of greedy search itself. Together this enables the use of highly-efficient partitioned-matrix-inverse techniques that result in large speedups of computation in both the forward-selection and backward-elimination stages of greedy algorithms in general.
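The two-class analytic shortcut can be checked numerically: when the between-class scatter is the rank-one matrix Sb = d dᵀ (d the difference of class means), the only nonzero generalized eigenvalue of (Sb, Sw) is dᵀ Sw⁻¹ d. A minimal sketch on synthetic data (the scatter definitions here are simplified for illustration, not the NASA algorithm itself):

```python
import numpy as np

rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, (200, 5))    # class 0 samples
X1 = rng.normal(1.5, 1.0, (200, 5))    # class 1 samples

d = X1.mean(axis=0) - X0.mean(axis=0)
Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)  # within-class scatter
Sb = np.outer(d, d)                                       # between-class scatter, rank one

# Analytic principal eigenvalue of the generalized problem Sb v = lam * Sw v:
lam_analytic = d @ np.linalg.solve(Sw, d)

# Cross-check against a generic eigensolver (the costly route being avoided):
lam_numeric = np.linalg.eigvals(np.linalg.solve(Sw, Sb)).real.max()
```

Because the analytic value needs only one linear solve per candidate subset, a greedy search can evaluate subsets far faster than with an iterative eigensolver.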
Rethinking police training policies: large class sizes increase risk of police sexual misconduct.
Reingle Gonzalez, Jennifer M; Bishopp, Stephen A; Jetelina, Katelyn K
2016-09-01
The limited research on police sexual misconduct (PSM), a common form of police misconduct, suggests that no evidence-based strategies for prevention are available for use by police departments. To identify new avenues for prevention, we critically evaluated 'front-end' police recruiting, screening, hiring and training procedures. Internal Affairs records were linked with administrative reports and police academy graduation data for officers accused of sexual assault or misconduct between 1994 and 2014. Logistic and proportional hazards regression methods were used to identify predictors of discharge for sustained allegations of PSM and time to discharge, respectively. Officer's graduating class size was positively associated with odds of discharge for PSM. For every one-officer increase in class size, the rate of discharge for PSM increased by 9% [hazard ratio (HR) = 1.09, P < 0.01]. For particularly large classes (>35 graduates), discharge rates were at least four times greater than smaller classes (HR = 4.43, P < 0.05). Large class sizes and more annual graduates increase rates of PSM. Officer recruitment strategies or training quality may be compromised during periods of intensive hiring. Trainee to instructor ratios or maximum class sizes may be instituted by academies to ensure that all police trainees receive the required supervision, one-on-one training, feedback and attention necessary to maximize public safety. © The Author 2015. Published by Oxford University Press on behalf of Faculty of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Mapped Plot Patch Size Estimates
Paul C. Van Deusen
2005-01-01
This paper demonstrates that the mapped plot design is relatively easy to analyze and describes existing formulas for mean and variance estimators. New methods are developed for using mapped plots to estimate average patch size of condition classes. The patch size estimators require assumptions about the shape of the condition class, limiting their utility. They may...
Energetic constraints, size gradients, and size limits in benthic marine invertebrates.
Sebens, Kenneth P
2002-08-01
Populations of marine benthic organisms occupy habitats with a range of physical and biological characteristics. In the intertidal zone, energetic costs increase with temperature and aerial exposure, and prey intake increases with immersion time, generating size gradients with small individuals often found at upper limits of distribution. Wave action can have similar effects, limiting feeding time or success, although certain species benefit from wave dislodgment of their prey; this also results in gradients of size and morphology. The difference between energy intake and metabolic (and/or behavioral) costs can be used to determine an energetic optimal size for individuals in such populations. Comparisons of the energetic optimal size to the maximum predicted size based on mechanical constraints, and the ensuing mortality schedule, provide a mechanism to study and explain organism size gradients in intertidal and subtidal habitats. For species where the energetic optimal size is well below the maximum size that could persist under a certain set of wave/flow conditions, it is probable that energetic constraints dominate. When the opposite is true, populations of small individuals can dominate habitats with strong dislodgment or damage probability. When the maximum size of individuals is far below either energetic optima or mechanical limits, other sources of mortality (e.g., predation) may favor energy allocation to early reproduction rather than to continued growth. Predictions based on optimal size models have been tested for a variety of intertidal and subtidal invertebrates including sea anemones, corals, and octocorals. This paper provides a review of the optimal size concept, and employs a combination of the optimal energetic size model and a life history modeling approach to explore energy allocation to growth or reproduction as the optimal size is approached.
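The optimal-size logic can be illustrated with a toy allometric model (exponents and coefficients are hypothetical, not Sebens's fitted values): intake scales with surface area, cost with mass, and the energetic optimum is where marginal intake equals marginal cost:

```python
import numpy as np

# Hypothetical model: intake = a * S^(2/3) (surface area), cost = b * S (mass).
a, b, p = 1.0, 0.1, 2.0 / 3.0

def surplus(S):
    """Net energy available for growth/reproduction at size S."""
    return a * S**p - b * S

# Setting d(surplus)/dS = 0 gives a*p*S^(p-1) = b:
S_star = (b / (a * p)) ** (1.0 / (p - 1.0))   # analytic energetic optimum

S = np.linspace(1.0, 600.0, 5991)
S_num = S[np.argmax(surplus(S))]              # numeric optimum on a grid
```

Comparing S_star with a mechanical size limit (as the abstract describes) then predicts whether energetic or mechanical constraints dominate in a given habitat.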
NASA Astrophysics Data System (ADS)
Iswari, T.; Asih, A. M. S.
2018-04-01
In a logistics system, transportation plays an important role in connecting every element of the supply chain, but it can also produce the greatest cost. It is therefore important to keep transportation costs as low as possible. One way to minimize transportation cost is to optimize the routing of vehicles, which is the Vehicle Routing Problem (VRP). The most common type of VRP is the Capacitated Vehicle Routing Problem (CVRP), in which each vehicle has its own capacity and the total customer demand served must not exceed that capacity. CVRP belongs to the class of NP-hard problems, so exact algorithms become highly time-consuming as problem size increases; for large-scale problem instances, as typically found in industrial applications, finding an optimal solution is not practicable. This paper therefore applies two metaheuristic approaches to solving CVRP, Genetic Algorithm and Particle Swarm Optimization, and compares the results and performance of the two algorithms. The results show that both algorithms perform well in solving CVRP but still need improvement. From algorithm testing and a numerical example, the Genetic Algorithm yields a better solution than Particle Swarm Optimization in total distance travelled.
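A minimal Genetic Algorithm sketch for CVRP (the toy instance, parameters, and operators below are illustrative assumptions, not the paper's setup): a chromosome is a customer permutation, decoded greedily into capacity-feasible routes:

```python
import math
import random

random.seed(1)
coords = [(0, 0), (2, 3), (5, 1), (6, 6), (1, 7), (8, 3), (3, 5), (7, 8)]  # 0 = depot
demand = [0, 4, 6, 5, 4, 7, 3, 5]
CAP = 12  # vehicle capacity

def dist(i, j):
    (x1, y1), (x2, y2) = coords[i], coords[j]
    return math.hypot(x1 - x2, y1 - y2)

def decode(perm):
    """Split a customer permutation into capacity-feasible routes."""
    routes, route, load = [], [], 0
    for c in perm:
        if load + demand[c] > CAP:
            routes.append(route); route, load = [], 0
        route.append(c); load += demand[c]
    routes.append(route)
    return routes

def cost(perm):
    total = 0.0
    for route in decode(perm):
        stops = [0] + route + [0]                  # every route starts/ends at depot
        total += sum(dist(a, b) for a, b in zip(stops, stops[1:]))
    return total

def ox(p1, p2):
    """Order crossover: keep a slice of p1, fill the rest in p2's order."""
    i, j = sorted(random.sample(range(len(p1)), 2))
    hole = set(p1[i:j])
    filler = [c for c in p2 if c not in hole]
    return filler[:i] + p1[i:j] + filler[i:]

def ga(generations=200, pop_size=30):
    customers = list(range(1, len(coords)))
    pop = [random.sample(customers, len(customers)) for _ in range(pop_size)]
    for _ in range(generations):
        nxt = [min(pop, key=cost)]                 # elitism
        while len(nxt) < pop_size:
            p1, p2 = (min(random.sample(pop, 3), key=cost) for _ in range(2))
            child = ox(p1, p2)
            if random.random() < 0.2:              # swap mutation
                a, b = random.sample(range(len(child)), 2)
                child[a], child[b] = child[b], child[a]
            nxt.append(child)
        pop = nxt
    return min(pop, key=cost)

best = ga()
```

A PSO variant would instead move real-valued position vectors and decode them to permutations (e.g. by ranking), which is one reason encoding choice matters in the comparison.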
Quantum algorithm for energy matching in hard optimization problems
NASA Astrophysics Data System (ADS)
Baldwin, C. L.; Laumann, C. R.
2018-06-01
We consider the ability of local quantum dynamics to solve the "energy-matching" problem: given an instance of a classical optimization problem and a low-energy state, find another macroscopically distinct low-energy state. Energy matching is difficult in rugged optimization landscapes, as the given state provides little information about the distant topography. Here, we show that the introduction of quantum dynamics can provide a speedup over classical algorithms in a large class of hard optimization problems. Tunneling allows the system to explore the optimization landscape while approximately conserving the classical energy, even in the presence of large barriers. Specifically, we study energy matching in the random p-spin model of spin-glass theory. Using perturbation theory and exact diagonalization, we show that introducing a transverse field leads to three sharp dynamical phases, only one of which solves the matching problem: (1) a small-field "trapped" phase, in which tunneling is too weak for the system to escape the vicinity of the initial state; (2) a large-field "excited" phase, in which the field excites the system into high-energy states, effectively forgetting the initial energy; and (3) the intermediate "tunneling" phase, in which the system succeeds at energy matching. The rate at which distant states are found in the tunneling phase, although exponentially slow in system size, is exponentially faster than classical search algorithms.
Improved mine blast algorithm for optimal cost design of water distribution systems
NASA Astrophysics Data System (ADS)
Sadollah, Ali; Guen Yoo, Do; Kim, Joong Hoon
2015-12-01
The design of water distribution systems is a large class of combinatorial, nonlinear optimization problems with complex constraints such as conservation of mass and energy equations. Since feasible solutions are often extremely complex, traditional optimization techniques are insufficient. Recently, metaheuristic algorithms have been applied to this class of problems because they are highly efficient. In this article, a recently developed optimizer called the mine blast algorithm (MBA) is considered. The MBA is improved and coupled with the hydraulic simulator EPANET to find the optimal cost design for water distribution systems. The performance of the improved mine blast algorithm (IMBA) is demonstrated using the well-known Hanoi, New York tunnels and Balerma benchmark networks. Optimization results obtained using IMBA are compared to those using MBA and other optimizers in terms of their minimum construction costs and convergence rates. For the complex Balerma network, IMBA offers the cheapest network design compared to other optimization algorithms.
Adaptability of laser diffraction measurement technique in soil physics methodology
NASA Astrophysics Data System (ADS)
Barna, Gyöngyi; Szabó, József; Rajkai, Kálmán; Bakacsi, Zsófia; Koós, Sándor; László, Péter; Hauk, Gabriella; Makó, András
2016-04-01
There are efforts all around the world to harmonize soils' particle size distribution (PSD) data obtained by laser diffractometer measurements (LDM) with those of the sedimentation techniques (pipette or hydrometer methods). Unfortunately, depending on the applied methodology (e.g. type of pre-treatment, kind of dispersant, etc.), PSDs from the sedimentation methods (which follow different standards) are dissimilar and can hardly be harmonized with each other. A need therefore arose to build a database containing PSD values measured by the pipette method according to the Hungarian standard (MSZ-08. 0205: 1978) and by LDM according to a widespread and widely used procedure. In this publication the first results of a statistical analysis of the new and growing PSD database are presented: 204 soil samples measured with the pipette method and LDM (Malvern Mastersizer 2000, HydroG dispersion unit) were compared. Applying the usual size limits to the LDM, the clay fraction was highly underestimated and the silt fraction overestimated compared to the pipette method. Consequently, soil texture classes determined from the LDM measurements differ significantly from the results of the pipette method. Based on previous surveys, and in order to optimize agreement between the two datasets, the clay/silt boundary for the LDM was changed. Comparing the PSD results of the pipette method with those of the LDM, the modified size limits gave higher similarity for the clay and silt fractions. Extending the upper size limit of the clay fraction from 0.002 to 0.0066 mm, and correspondingly the lower size limit of the silt fraction, makes the pipette method and LDM more readily comparable. With the modified limit, higher correlations were also found between clay content and water vapor adsorption and specific surface area. Texture classes were also found to be less dissimilar.
The difference between the results of the two kinds of PSD measurement methods could be further reduced by taking into account other routinely analyzed soil parameters (e.g. pH(H2O), organic carbon and calcium carbonate content).
Optimal city size and population density for the 21st century.
Speare, A.; White, M. J.
1990-10-01
The thesis that large scale urban areas result in greater efficiency, reduced costs, and a better quality of life is reexamined. The environmental and social costs are measured for different scales of settlement. The desirability and perceived problems of a particular place are examined in relation to size of place. The consequences of population decline are considered. New York City is described as providing both opportunities in employment, shopping, and cultural activities as well as a high cost of living, crime, and pollution. The historical development of large cities in the US is described. Immigration has contributed to a greater concentration of population than would otherwise have occurred. The spatial proximity of goods and services argument (agglomeration economies) has changed with advancements in technology such as roads, trucking, and electronic communication. There is no optimal city size. The overall effect of agglomeration can be assessed by determining whether the markets for goods and labor are adequate to maximize well-being and balance the negative and positive aspects of urbanization. The environmental costs of cities increase with size when air quality, water quality, sewage treatment, and hazardous waste disposal are considered. Smaller scale and lower density cities have the advantage of a lower concentration of pollutants. Also, mobilization for program support is easier with a homogeneous population. Lower population growth in large cities would contribute to a higher quality of life, since large metropolitan areas have a concentration of immigrants, younger age distributions, and minority groups with higher than average birth rates. The negative consequences of decline can be avoided if the reduction of population in large cities takes place gradually. For example, poorer quality housing can be removed for open space. Cities should, however, still attract all classes of people with opportunities equally available.
Implementation and Optimization of miniGMG - a Compact Geometric Multigrid Benchmark
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Samuel; Kalamkar, Dhiraj; Singh, Amik
2012-12-01
Multigrid methods are widely used to accelerate the convergence of iterative solvers for linear systems used in a number of different application areas. In this report, we describe miniGMG, our compact geometric multigrid benchmark designed to proxy the multigrid solves found in AMR applications. We explore optimization techniques for geometric multigrid on existing and emerging multicore systems including the Opteron-based Cray XE6, Intel Sandy Bridge and Nehalem-based Infiniband clusters, as well as manycore-based architectures including NVIDIA's Fermi and Kepler GPUs and Intel's Knights Corner (KNC) co-processor. This report examines a variety of novel techniques including communication-aggregation, threaded wavefront-based DRAM communication-avoiding, dynamic threading decisions, SIMDization, and fusion of operators. We quantify performance through each phase of the V-cycle for both single-node and distributed-memory experiments and provide detailed analysis for each class of optimization. Results show our optimizations yield significant speedups across a variety of subdomain sizes while simultaneously demonstrating the potential of multi- and manycore processors to dramatically accelerate single-node performance. However, our analysis also indicates that improvements in networks and communication will be essential to reap the potential of manycore processors in large-scale multigrid calculations.
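The V-cycle the report instruments can be sketched in miniature. This 1-D Poisson example (weighted-Jacobi smoother, full-weighting restriction, linear interpolation; all parameters illustrative and far simpler than miniGMG's 3-D solver) shows the recursive structure whose phases are being profiled:

```python
import numpy as np

def jacobi(u, f, h, sweeps, w=2.0 / 3.0):
    """Weighted-Jacobi smoothing for the 1-D stencil -u'' = f."""
    for _ in range(sweeps):
        u[1:-1] += w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1] - 2.0 * u[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def vcycle(u, f, h):
    n = len(u) - 1
    if n == 2:                                   # coarsest grid: solve exactly
        u[1] = 0.5 * (h * h * f[1] + u[0] + u[2])
        return u
    u = jacobi(u, f, h, 3)                       # pre-smooth
    r = residual(u, f, h)
    rc = np.zeros(n // 2 + 1)                    # full-weighting restriction
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    ec = vcycle(np.zeros_like(rc), rc, 2.0 * h)  # coarse-grid correction
    e = np.zeros_like(u)
    e[::2] = ec                                  # linear interpolation
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    u += e
    return jacobi(u, f, h, 3)                    # post-smooth

n = 64
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi**2 * np.sin(np.pi * x)   # -u'' = f has solution u = sin(pi*x)
u = np.zeros(n + 1)
for _ in range(10):
    u = vcycle(u, f, h)
```

In miniGMG the same smooth/restrict/interpolate phases run on 3-D subdomains, which is where the communication-aggregation and wavefront optimizations apply.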
Parametric Mass Modeling for Mars Entry, Descent and Landing System Analysis Study
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.; Komar, D. R.
2011-01-01
This paper provides an overview of the parametric mass models used for the Entry, Descent, and Landing Systems Analysis study conducted by NASA in FY2009-2010. The study examined eight unique exploration class architectures that included elements such as a rigid mid-L/D aeroshell, a lifting hypersonic inflatable decelerator, a drag supersonic inflatable decelerator, a lifting supersonic inflatable decelerator implemented with a skirt, and subsonic/supersonic retro-propulsion. Parametric models used in this study relate the component mass to vehicle dimensions and mission key environmental parameters such as maximum deceleration and total heat load. The use of a parametric mass model allows the simultaneous optimization of trajectory and mass sizing parameters.
ERIC Educational Resources Information Center
Goldfinch, Judy
1996-01-01
A study compared the effectiveness of two methods (medium-size class instruction and large lectures with tutorial sessions) for teaching mathematics and statistics to first-year business students. Students and teachers overwhelmingly preferred the medium-size class method, which produced higher exam scores but had no significant effect on…
ERIC Educational Resources Information Center
Boniface, Russell; Protheroe, Nancy
Class-size reduction (CSR) has been a complex and contentious issue for the last quarter century. Although the small-class concept was adopted because it appealed to common sense, research over time has revealed a mix of confounding variables, instead of a definitive conclusion. Some CSR efforts, such as Tennessee's Project STAR and Wisconsin's…
Classification of stellar spectra with SVM based on within-class scatter and between-class scatter
NASA Astrophysics Data System (ADS)
Liu, Zhong-bao; Zhou, Fang-xiao; Qin, Zhen-tao; Luo, Xue-gang; Zhang, Jing
2018-07-01
Support Vector Machine (SVM) is a popular data mining technique that has been widely applied in astronomical tasks, especially in stellar spectra classification. Because SVM does not take the data distribution into consideration, its classification efficiency cannot be greatly improved. SVM also ignores internal information of the training dataset, such as its within-class and between-class structure. In view of this, we propose a new classification algorithm in this paper, SVM based on Within-Class Scatter and Between-Class Scatter (WBS-SVM). Like SVM, WBS-SVM seeks an optimal hyperplane to separate two classes; the difference is that it incorporates the minimum within-class scatter and maximum between-class scatter criteria of Linear Discriminant Analysis (LDA) into SVM. These two scatters describe the distribution of the training dataset, and the WBS-SVM optimization ensures that samples in the same class are as close as possible while samples in different classes are as far apart as possible. Experiments on K-, F-, and G-type stellar spectra from the Sloan Digital Sky Survey (SDSS), Data Release 8, show that the proposed WBS-SVM can greatly improve classification accuracy.
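The within-class and between-class scatter matrices that WBS-SVM borrows from LDA can be computed directly. A minimal numpy sketch on synthetic data (not stellar spectra), with the standard identity "within + between = total scatter" as a sanity check:

```python
import numpy as np

def scatter_matrices(X, y):
    """Within-class (Sw) and between-class (Sb) scatter matrices, as in LDA."""
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)              # spread around class mean
        diff = (mc - mean_all).reshape(-1, 1)
        Sb += len(Xc) * (diff @ diff.T)            # class means around global mean
    return Sw, Sb

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 3)), rng.normal(2, 1, (40, 3))])
y = np.array([0] * 30 + [1] * 40)
Sw, Sb = scatter_matrices(X, y)
St = (X - X.mean(axis=0)).T @ (X - X.mean(axis=0))  # total scatter, = Sw + Sb
```

WBS-SVM's objective penalizes trace-like terms built from Sw and Sb alongside the usual SVM margin term, pulling same-class samples together and pushing classes apart.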
Reliability and agreement in student ratings of the class environment.
Nelson, Peter M; Christ, Theodore J
2016-09-01
The current study estimated the reliability and agreement of student ratings of the classroom environment obtained using the Responsive Environmental Assessment for Classroom Teaching (REACT; Christ, Nelson, & Demers, 2012; Nelson, Demers, & Christ, 2014). Coefficient alpha, class-level reliability, and class agreement indices were evaluated as each index provides important information for different interpretations and uses of student rating scale data. Data for 84 classes across 29 teachers in a suburban middle school were sampled to derive reliability and agreement indices for the REACT subscales across 4 class sizes: 25, 20, 15, and 10. All participating teachers were White and a larger number of 6th-grade classes were included (42%) relative to 7th- (33%) or 8th- (23%) grade classes. Teachers were responsible for a variety of content areas, including language arts (26%), science (26%), math (20%), social studies (19%), communications (6%), and Spanish (3%). Coefficient alpha estimates were generally high across all subscales and class sizes (α = .70-.95); class-mean estimates were greatly impacted by the number of students sampled from each class, with class-level reliability values generally falling below .70 when class size was reduced from 25 to 20. Further, within-class student agreement varied widely across the REACT subscales (mean agreement = .41-.80). Although coefficient alpha and test-retest reliability are commonly reported in research with student rating scales, class-level reliability and agreement are not. The observed differences across coefficient alpha, class-level reliability, and agreement indices provide evidence for evaluating students' ratings of the class environment according to their intended use (e.g., differentiating between classes, class-level instructional decisions). (PsycINFO Database Record (c) 2016 APA, all rights reserved).
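Coefficient alpha, one of the reliability indices evaluated above, has a simple closed form; a minimal sketch (the ratings are synthetic, not REACT data):

```python
import numpy as np

def coefficient_alpha(ratings):
    """Cronbach's coefficient alpha for an n_students x k_items rating matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1)
    total_var = ratings.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

base = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
consistent = np.column_stack([base, base + 1, base + 2])  # items move together
alpha = coefficient_alpha(consistent)                     # -> 1.0
```

Class-level reliability and within-class agreement require variance components across classes, which is why they behave differently from alpha as the number of students sampled per class shrinks.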
The Future of Small- and Medium-Sized Communities in the Prairie Region.
ERIC Educational Resources Information Center
Wellar, Barry S., Ed.
Four papers are featured. The first is a statistical overview and analysis of past, present and future happenings to small communities in the Region; it focuses on two indicators: (1) population growth or declining community class size and, (2) the changing distribution of commercial outlets by community class size. The other three papers report…
Channel catfish Ictalurus punctatus size and feed conversion ratio
USDA-ARS?s Scientific Manuscript database
Channel catfish, Ictalurus punctatus, of five size-classes were stocked into 20, 0.04-ha earthen ponds at a rate of 14,826 fish/ha. Mean initial weights for each size-class were 0.232, 0.458, 0.678, 0.911, and 1.10 kg/fish. Four ponds were randomly allotted to each treatment. A commercial 28% protei...
Convex Optimization over Classes of Multiparticle Entanglement
NASA Astrophysics Data System (ADS)
Shang, Jiangwei; Gühne, Otfried
2018-02-01
A well-known strategy to characterize multiparticle entanglement utilizes the notion of stochastic local operations and classical communication (SLOCC), but characterizing the resulting entanglement classes is difficult. Given a multiparticle quantum state, we first show that Gilbert's algorithm can be adapted to prove separability or membership in a certain entanglement class. We then present two algorithms for convex optimization over SLOCC classes. The first algorithm uses a simple gradient approach, while the other one employs the accelerated projected-gradient method. For demonstration, the algorithms are applied to the likelihood-ratio test using experimental data on bound entanglement of a noisy four-photon Smolin state [Phys. Rev. Lett. 105, 130501 (2010), 10.1103/PhysRevLett.105.130501].
Efficiency of quantum vs. classical annealing in nonconvex learning problems
Zecchina, Riccardo
2018-01-01
Quantum annealers aim at solving nonconvex optimization problems by exploiting cooperative tunneling effects to escape local minima. The underlying idea consists of designing a classical energy function whose ground states are the sought optimal solutions of the original optimization problem and adding a controllable quantum transverse field to generate tunneling processes. A key challenge is to identify classes of nonconvex optimization problems for which quantum annealing remains efficient while thermal annealing fails. We show that this happens for a wide class of problems which are central to machine learning. Their energy landscapes are dominated by local minima that cause exponential slowdown of classical thermal annealers, while simulated quantum annealing converges efficiently to rare dense regions of optimal solutions. PMID:29382764
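The transverse-field construction described in this abstract is conventionally written as follows (a standard textbook form, not this paper's own notation):

```latex
H(t) \;=\; H_{P} \;-\; \Gamma(t)\sum_{i=1}^{N}\sigma_{i}^{x},
```

where $H_{P}$ is the classical cost function encoded in the $\sigma^{z}$ basis, whose ground states are the optimal solutions, and the transverse-field strength $\Gamma(t)$ is ramped from a large value down to zero so that tunneling gradually gives way to the classical landscape.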
Power Allocation Based on Data Classification in Wireless Sensor Networks
Wang, Houlian; Zhou, Gongbo
2017-01-01
Limited node energy in wireless sensor networks is a crucial factor which affects the monitoring of equipment operation and working conditions in coal mines. In addition, due to heterogeneous nodes and different data acquisition rates, the number of arriving packets in a queue network can differ, which may lead to some queue lengths reaching the maximum value earlier than others. In order to tackle these two problems, an optimal power allocation strategy based on classified data is proposed in this paper. Arriving data is classified into dissimilar classes depending on the number of arriving packets. The problem is formulated as a Lyapunov drift optimization with the objective of minimizing the weighted sum of average power consumption and average data class. As a result, a suboptimal distributed algorithm without any knowledge of system statistics is presented. The simulations, conducted in the perfect channel state information (CSI) case and the imperfect CSI case, reveal that the utility can be pushed arbitrarily close to optimal by increasing the parameter V, at the cost of a corresponding growth in average delay, and that the other tunable parameter W and the classification method within the utility function can trade power optimality for an increased average data class. These results show that data in a high class has priority over data in a low class, and that energy consumption can be minimized under this resource allocation strategy. PMID:28498346
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mamaluy, Denis; Gao, Xujiao; Tierney, Brian David
We created a highly efficient, universal 3D quantum transport simulator. We demonstrated that the simulator scales linearly both with the problem size (N) and the number of CPUs, which represents an important breakthrough in the field of computational nanoelectronics. It allowed us, for the first time, to accurately simulate and optimize a large number of realistic nanodevices in a much shorter time than other methods/codes such as RGF[~N^2.333]/KNIT, KWANT, and QTBM[~N^3]/NEMO5. In order to determine the best-in-class for different beyond-CMOS paradigms, we performed rigorous device optimization for high-performance logic devices at 6-, 5- and 4-nm gate lengths. We have discovered that there exists a fundamental down-scaling limit for CMOS technology and other field-effect transistors (FETs). We have found that, at room temperature, all FETs, irrespective of their channel material, will start experiencing unacceptable levels of thermally induced errors around 5-nm gate lengths.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lingel, Andreas; Sendzik, Martin; Huang, Ying
2017-01-12
PRC2 is a multisubunit methyltransferase involved in epigenetic regulation of early embryonic development and cell growth. The catalytic subunit EZH2 methylates primarily lysine 27 of histone H3, leading to chromatin compaction and repression of tumor suppressor genes. Inhibiting this activity by small molecules targeting EZH2 was shown to result in antitumor efficacy. Here, we describe the optimization of a chemical series representing a new class of PRC2 inhibitors which acts allosterically via the trimethyllysine pocket of the noncatalytic EED subunit. Deconstruction of a larger and complex screening hit to a simple fragment-sized molecule followed by structure-guided regrowth and careful property modulation were employed to yield compounds which achieve submicromolar inhibition in functional assays and cellular activity. The resulting molecules can serve as a simplified entry point for lead optimization and can be utilized to study this new mechanism of PRC2 inhibition and the associated biology in detail.
Nguyen, Thi-Tham; Van Le, Duc; Yoon, Seokhoon
2014-01-01
This paper proposes a practical low-complexity MAC (medium access control) scheme for quality of service (QoS)-aware and cluster-based underwater acoustic sensor networks (UASN), in which the provision of differentiated QoS is required. In such a network, underwater sensors (U-sensor) in a cluster are divided into several classes, each of which has a different QoS requirement. The major problem considered in this paper is the maximization of the number of nodes that a cluster can accommodate while still providing the required QoS for each class in terms of the PDR (packet delivery ratio). In order to address the problem, we first estimate the packet delivery probability (PDP) and use it to formulate an optimization problem to determine the optimal value of the maximum packet retransmissions for each QoS class. The custom greedy and interior-point algorithms are used to find the optimal solutions, which are verified by extensive simulations. The simulation results show that, by solving the proposed optimization problem, the supportable number of underwater sensor nodes can be maximized while satisfying the QoS requirements for each class. PMID:24608009
NASA Astrophysics Data System (ADS)
Grippa, Tais; Georganos, Stefanos; Lennert, Moritz; Vanhuysse, Sabine; Wolff, Eléonore
2017-10-01
Mapping large heterogeneous urban areas using object-based image analysis (OBIA) remains challenging, especially with respect to the segmentation process. This can be explained both by the complex arrangement of heterogeneous land-cover classes and by the high diversity of urban patterns encountered throughout the scene. In this context, using a single segmentation parameter to obtain satisfying segmentation results for the whole scene can be impossible. Nonetheless, it is possible to subdivide the whole city into smaller local zones, rather homogeneous according to their urban pattern. These zones can then be used to optimize the segmentation parameter locally, instead of using the whole image or a single representative spatial subset. This paper assesses the contribution of a local approach to segmentation parameter optimization compared to a global approach. Ouagadougou, located in sub-Saharan Africa, is used as a case study. First, the whole scene is segmented using a single globally optimized segmentation parameter. Second, the city is subdivided into 283 local zones, homogeneous in terms of building size and building density. Each local zone is then segmented using a locally optimized segmentation parameter. Unsupervised segmentation parameter optimization (USPO), relying on an optimization function which tends to maximize both intra-object homogeneity and inter-object heterogeneity, is used to select the segmentation parameter automatically for both approaches. Finally, a land-use/land-cover classification is performed using the Random Forest (RF) classifier. The results reveal that the local approach outperforms the global one, especially by limiting confusions between buildings and their bare-soil neighbors.
Birth order, family size, and intelligence.
Belmont, L; Marolla, F A
1973-12-14
The relation of birth order and family size to intellectual performance, as measured by the Raven Progressive Matrices, was examined among nearly all of 400,000 19-year-old males born in the Netherlands in 1944 through 1947. It was found that birth order and family size had independent effects on intellectual performance. Effects of family size were not present in all social classes, but effects of birth order were consistent across social class.
Hu, Yue-Hua; Kitching, Roger L.; Lan, Guo-Yu; Zhang, Jiao-Lin; Sha, Li-Qing; Cao, Min
2014-01-01
We have investigated the processes of community assembly using size classes of trees. Specifically, our work examined (1) whether point process models incorporating an effect of size-class produce more realistic summary outcomes than do models without this effect; and (2) which of three selected models incorporating, respectively, environmental effects, dispersal, and the joint effect of both of these, is most useful in explaining species-area relationships (SARs) and point dispersion patterns. For this evaluation we used tree species data from the 50-ha forest dynamics plot on Barro Colorado Island, Panama and the comparable 20-ha plot at Bubeng, Southwest China. Our results demonstrated that incorporating a size-class effect dramatically improved the SAR estimation at both plots when the dispersal-only model was used. The joint-effect model produced similar improvement, but only for the 50-ha plot in Panama. The point-pattern results were not improved by incorporation of size-class effects using any of the three models. Our results indicate that dispersal is likely to be a key process determining both SARs and point patterns. The environment-only model and joint-effects model were effective at the species level and the community level, respectively. We conclude that it is critical to use multiple summary characteristics when modelling spatial patterns at the species and community levels if a comprehensive understanding of the ecological processes that shape species' distributions is sought; without this, results may have inherent biases. By influencing dispersal, the effect of size-class contributes to species assembly and enhances our understanding of species coexistence. PMID:25251538
Rius, Cristina; Attaf, Meriem; Tungatt, Katie; Bianchi, Valentina; Legut, Mateusz; Bovay, Amandine; Donia, Marco; Thor Straten, Per; Peakman, Mark; Svane, Inge Marie; Ott, Sascha; Connor, Tom; Szomolay, Barbara; Dolton, Garry; Sewell, Andrew K
2018-04-01
Peptide-MHC (pMHC) multimers, usually used as streptavidin-based tetramers, have transformed the study of Ag-specific T cells by allowing direct detection, phenotyping, and enumeration within polyclonal T cell populations. These reagents are now a standard part of the immunology toolkit and have been used in many thousands of published studies. Unfortunately, the TCR-affinity threshold required for staining with standard pMHC multimer protocols is higher than that required for efficient T cell activation. This discrepancy makes it possible for pMHC multimer staining to miss fully functional T cells, especially where low-affinity TCRs predominate, such as in MHC class II-restricted responses or those directed against self-antigens. Several recent, somewhat alarming, reports indicate that pMHC staining might fail to detect the majority of functional T cells and have prompted suggestions that T cell immunology has become biased toward the type of cells amenable to detection with multimeric pMHC. We use several viral- and tumor-specific pMHC reagents to compare populations of human T cells stained by standard pMHC protocols and optimized protocols that we have developed. Our results confirm that optimized protocols recover greater populations of T cells that include fully functional T cell clonotypes that cannot be stained by regular pMHC-staining protocols. These results highlight the importance of using optimized procedures that include the use of protein kinase inhibitor and Ab cross-linking during staining to maximize the recovery of Ag-specific T cells and serve to further highlight that many previous quantifications of T cell responses with pMHC reagents are likely to have considerably underestimated the size of the relevant populations. Copyright © 2018 The Authors.
High performance interconnection between high data rate networks
NASA Technical Reports Server (NTRS)
Foudriat, E. C.; Maly, K.; Overstreet, C. M.; Zhang, L.; Sun, W.
1992-01-01
The bridge/gateway system needed to interconnect a wide range of computer networks to support a wide range of user quality-of-service requirements is discussed. The bridge/gateway must handle a wide range of message types, including synchronous and asynchronous traffic, large, bursty messages, short, self-contained messages, time-critical messages, etc. It is shown that messages can be classified into three basic classes: synchronous messages and large and small asynchronous messages. The first two require call setup so that packet identification, buffer handling, etc. can be supported in the bridge/gateway; identification enables resequencing and handling of differences in packet size. The third class is for messages which do not require call setup. Resequencing hardware designed to handle two types of resequencing problems is presented. The first is for a virtual parallel circuit which can scramble channel bytes. The second system is effective in handling both synchronous and asynchronous traffic between networks with highly differing packet sizes and data rates. The two other major needs for the bridge/gateway are congestion and error control. A dynamic, lossless congestion control scheme which can easily support effective error correction is presented. Results indicate that the congestion control scheme provides close to optimal capacity under congested conditions. Under conditions where errors may develop due to intervening networks which are not lossless, intermediate error recovery and correction takes 1/3 less time than equivalent end-to-end error correction under similar conditions.
Integrated controls-structures design methodology development for a class of flexible spacecraft
NASA Technical Reports Server (NTRS)
Maghami, P. G.; Joshi, S. M.; Walz, J. E.; Armstrong, E. S.
1990-01-01
Future utilization of space will require large space structures in low-Earth and geostationary orbits. Example missions include: Earth observation systems, personal communication systems, space science missions, space processing facilities, etc., requiring large antennas, platforms, and solar arrays. The dimensions of such structures will range from a few meters to possibly hundreds of meters. For reducing the cost of construction, launching, and operating (e.g., energy required for reboosting and control), it will be necessary to make the structure as light as possible. However, reducing structural mass tends to increase the flexibility which would make it more difficult to control with the specified precision in attitude and shape. Therefore, there is a need to develop a methodology for designing space structures which are optimal with respect to both structural design and control design. In the current spacecraft design practice, it is customary to first perform the structural design and then the controller design. However, the structural design and the control design problems are substantially coupled and must be considered concurrently in order to obtain a truly optimal spacecraft design. For example, let C denote the set of the 'control' design variables (e.g., controller gains), and L the set of the 'structural' design variables (e.g., member sizes). If a structural member thickness is changed, the dynamics would change which would then change the control law and the actuator mass. That would, in turn, change the structural model. Thus, the sets C and L depend on each other. Future space structures can be roughly divided into four mission classes. Class 1 missions include flexible spacecraft with no articulated appendages which require fine attitude pointing and vibration suppression (e.g., large space antennas). 
Class 2 missions consist of flexible spacecraft with articulated multiple payloads, where the requirement is to fine-point the spacecraft and each individual payload while suppressing the elastic motion. Class 3 missions include rapid slewing of spacecraft without appendages, while Class 4 missions include general nonlinear motion of a flexible spacecraft with articulated appendages and robot arms. Class 1 and 2 missions represent linear mathematical modeling and control system design problems (except for actuator and sensor nonlinearities), while Class 3 and 4 missions represent nonlinear problems. The development of an integrated controls/structures design approach for Class 1 missions is addressed. The performance for these missions is usually specified in terms of (1) root mean square (RMS) pointing errors at different locations on the structure, and (2) the rate of decay of the transient response. Both of these performance measures include the contributions of rigid as well as elastic motion.
ERIC Educational Resources Information Center
Munoz, Marco A.; Portes, Pedro R.
A class size reduction (CSR) program was implemented in a large low-performing urban elementary school district. The CSR program helps schools improve student learning by hiring additional teachers so that children in the early elementary grades can attend smaller classes. This study used a participant-oriented evaluation model to examine the…
The optimization of peptide cargo bound to MHC class I molecules by the peptide-loading complex.
Elliott, Tim; Williams, Anthony
2005-10-01
Major histocompatibility complex (MHC) class I complexes present peptides from both self and foreign intracellular proteins on the surface of most nucleated cells. The assembled heterotrimeric complexes consist of a polymorphic glycosylated heavy chain, non-polymorphic beta(2) microglobulin, and a peptide of typically nine amino acids in length. Assembly of the class I complexes occurs in the endoplasmic reticulum and is assisted by a number of chaperone molecules. A multimolecular unit termed the peptide-loading complex (PLC) is integral to this process. The PLC contains a peptide transporter (transporter associated with antigen processing), a thiooxido-reductase (ERp57), a glycoprotein chaperone (calreticulin), and tapasin, a class I-specific chaperone. We suggest that class I assembly involves a process of optimization where the peptide cargo of the complex is edited by the PLC. Furthermore, this selective peptide loading is biased toward peptides that have a longer off-rate from the assembled complex. We suggest that tapasin is the key chaperone that directs this action of the PLC with secondary contributions from calreticulin and possibly ERp57. We provide a framework model for how this may operate at the molecular level and draw parallels with the proposed mechanism of action of human leukocyte antigen-DM for MHC class II complex optimization.
Volpe, Joseph M; Ward, Douglas J; Napolitano, Laura; Phung, Pham; Toma, Jonathan; Solberg, Owen; Petropoulos, Christos J; Walworth, Charles M
2015-01-01
Transmitted HIV-1 exhibiting reduced susceptibility to protease and reverse transcriptase inhibitors is well documented but limited for integrase inhibitors and enfuvirtide. We describe here a case of transmitted 5 drug class-resistance in an antiretroviral (ARV)-naïve patient who was successfully treated based on the optimized selection of an active ARV drug regimen. The value of baseline resistance testing to determine an optimal ARV treatment regimen is highlighted in this case report. © The Author(s) 2015.
Gong, Pinghua; Zhang, Changshui; Lu, Zhaosong; Huang, Jianhua Z; Ye, Jieping
2013-01-01
Non-convex sparsity-inducing penalties have recently received considerable attention in sparse learning. Recent theoretical investigations have demonstrated their superiority over their convex counterparts in several sparse learning settings. However, solving the non-convex optimization problems associated with non-convex penalties remains a big challenge. A commonly used approach is Multi-Stage (MS) convex relaxation (or DC programming), which relaxes the original non-convex problem to a sequence of convex problems. This approach is usually not very practical for large-scale problems because its computational cost is a multiple of solving a single convex problem. In this paper, we propose a General Iterative Shrinkage and Thresholding (GIST) algorithm to solve the non-convex optimization problem for a large class of non-convex penalties. The GIST algorithm iteratively solves a proximal operator problem, which in turn has a closed-form solution for many commonly used penalties. At each outer iteration of the algorithm, we use a line search initialized by the Barzilai-Borwein (BB) rule that allows finding an appropriate step size quickly. The paper also presents a detailed convergence analysis of the GIST algorithm. The efficiency of the proposed algorithm is demonstrated by extensive experiments on large-scale data sets.
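The iterate-prox-then-line-search structure described in this abstract can be sketched in a few lines. The sketch below uses the l1 penalty, whose proximal operator is soft-thresholding, purely as a stand-in with a closed-form prox (the actual GIST paper targets non-convex penalties such as capped-l1 and SCAD); the function name and the simple doubling line search are illustrative, not the paper's exact scheme:

```python
import numpy as np

def gist_like_solver(A, b, lam, iters=100):
    """Proximal-gradient loop in the spirit of GIST.

    Minimizes 0.5*||Ax - b||^2 + lam*||x||_1 via repeated proximal
    steps. Step sizes are initialized by the Barzilai-Borwein rule,
    then backed off by a simple line search until the objective
    decreases (a stand-in for the paper's line search).
    """
    def obj(x):
        r = A @ x - b
        return 0.5 * r @ r + lam * np.abs(x).sum()

    def prox_step(x, g, t):
        # Soft-thresholding: closed-form prox of the l1 penalty.
        z = x - g / t
        return np.sign(z) * np.maximum(np.abs(z) - lam / t, 0.0)

    x = np.zeros(A.shape[1])
    g = A.T @ (A @ x - b)
    t = 1.0
    for _ in range(iters):
        x_new = prox_step(x, g, t)
        # Line search: shrink the step (increase t) until descent.
        while obj(x_new) > obj(x) and t < 1e12:
            t *= 2.0
            x_new = prox_step(x, g, t)
        g_new = A.T @ (A @ x_new - b)
        s, y = x_new - x, g_new - g
        if s @ s > 0 and abs(s @ y) > 1e-12:
            t = max(abs(s @ y) / (s @ s), 1e-8)  # BB initialization
        x, g = x_new, g_new
    return x
```

With `A` the identity, the solution is the elementwise soft-threshold of `b`, which makes the loop easy to sanity-check.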
40 CFR 144.28 - Requirements for Class I, II, and III wells authorized by rule.
Code of Federal Regulations, 2010 CFR
2010-07-01
... proposed test or measurement to be made; (D) The amount, size, and location (by depth) of casing to be left..., internal pressure, and axial loading; (iv) Hole size; (v) Size and grade of all casing strings; and (vi... Class III wells the owner or operator shall provide to the Director a qualitative analysis and ranges in...
Determining stocking, forest type and stand-size class from forest inventory data
Mark H. Hansen; Jerold T. Hahn
1992-01-01
This paper describes the procedures used by North Central Forest Experiment Station's Forest Inventory and Analysis Work Unit (NCFIA) in determining stocking, forest type, and stand-size class. The stocking procedure assigns a portion of the stocking to individual trees measured on NCFIA 10-point field plots. Stand size and forest type are determined as functions...
Shape Comparison Between 0.4–2.0 and 20–60 μm Cement Particles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holzer, L.; Flatt, R; Erdogan, S
Portland cement powder, ground from much larger clinker particles, has a particle size distribution from about 0.1 to 100 μm. An important question is then: does particle shape depend on particle size? For the same cement, X-ray computed tomography has been used to examine the 3-D shape of particles in the 20-60 μm sieve range, and focused ion beam nanotomography has been used to examine the 3-D shape of cement particles found in the 0.4-2.0 μm sieve range. By comparing various kinds of computed particle shape data for each size class, the conclusion is made that, within experimental uncertainty, both size classes are prolate, but the smaller size class particles, 0.4-2.0 μm, tend to be somewhat more prolate than the 20-60 μm size class. The practical effect of this shape difference on the set-point was assessed using the Virtual Cement and Concrete Testing Laboratory to simulate the hydration of five cement powders. Results indicate that nonspherical aspect ratio is more important in determining the set-point than are the actual shape details.
An introduction to the COLIN optimization interface.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, William Eugene
2003-03-01
We describe COLIN, a Common Optimization Library INterface for C++. COLIN provides C++ template classes that define a generic interface for both optimization problems and optimization solvers. COLIN is specifically designed to facilitate the development of hybrid optimizers, for which one optimizer calls another to solve an optimization subproblem. We illustrate the capabilities of COLIN with an example of a memetic genetic programming solver.
Optimization of soil stabilization with class C fly ash.
DOT National Transportation Integrated Search
1987-01-01
Previous Iowa DOT sponsored research has shown that some Class C fly ashes are cementitious (because calcium is combined as calcium aluminates) while other Class C ashes containing similar amounts of elemental calcium are not (1). Fly ashes fro...
Two Classes of New Optimal Asymmetric Quantum Codes
NASA Astrophysics Data System (ADS)
Chen, Xiaojing; Zhu, Shixin; Kai, Xiaoshan
2018-03-01
Let q be an even prime power and ω be a primitive element of F_{q^2}. By analyzing the structure of cyclotomic cosets, we determine a sufficient condition for ω^{q-1}-constacyclic codes over F_{q^2} to be Hermitian dual-containing codes. By the CSS construction, two classes of new optimal AQECCs are obtained according to the Singleton bound for AQECCs.
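For context, the Singleton bound for asymmetric quantum codes invoked in this abstract is usually stated as follows (standard form from the AQECC literature, not this paper's notation):

```latex
k \;\le\; n - d_{x} - d_{z} + 2,
```

for an $[[n, k, d_{z}/d_{x}]]_{q}$ asymmetric quantum code, where $d_{z}$ and $d_{x}$ are the distances against phase-flip and bit-flip errors, respectively; codes attaining equality are the "optimal" AQECCs referred to above.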
Comparing the performance of various digital soil mapping approaches to map physical soil properties
NASA Astrophysics Data System (ADS)
Laborczi, Annamária; Takács, Katalin; Pásztor, László
2015-04-01
Spatial information on physical soil properties is in strong demand, in order to support environment-related and land-use management decisions. One of the most widely used properties to characterize soils physically is particle size distribution (PSD), which determines soil water management and cultivability. According to their size, different particles can be categorized as clay, silt, or sand. The size intervals are defined by national or international textural classification systems. The relative percentages of sand, silt, and clay in the soil constitute textural classes, which are also specified in various national and/or specialty systems. The most commonly used is the classification system of the United States Department of Agriculture (USDA). Soil texture information is essential input data in meteorological, hydrological and agricultural prediction modelling. Although Hungary has a great deal of legacy soil maps and other relevant soil information, it often occurs that maps do not exist for a certain characteristic with the required thematic and/or spatial representation. The recent developments in digital soil mapping (DSM), however, provide wide opportunities for the elaboration of object-specific soil maps (OSSM) with predefined parameters (resolution, accuracy, reliability etc.). Due to the simultaneous richness of available Hungarian legacy soil data, spatial inference methods and auxiliary environmental information, there is great versatility in possible approaches for the compilation of a given soil map. This suggests the opportunity of optimization. For the creation of an OSSM one might intend to identify the optimum set of soil data, method and auxiliary co-variables optimized for the resources (data costs, computation requirements etc.). We started a comprehensive analysis of the effects of the various DSM components on the accuracy of the output maps on pilot areas.
The aim of this study is to compare and evaluate different digital soil mapping methods and sets of ancillary variables for producing the most accurate spatial prediction of texture classes in a given area of interest. Both legacy and recently collected data on PSD were used as reference information. The predictor variable data set consisted of a digital elevation model and its derivatives, lithology, land use maps, as well as various bands and indices of satellite images. Two conceptually different approaches can be applied in the mapping process. Textural classification can be realized after the particle size data have been spatially extended by a suitable geostatistical method. Alternatively, the textural classification is carried out first, followed by spatial extension through a suitable data mining method. Following the first approach, maps of sand, silt and clay percentage have been computed through regression kriging (RK). Since the three maps are compositional (their sum must be 100%), we applied the Additive Log-Ratio (alr) transformation, instead of kriging them independently. Finally, the texture class map has been compiled according to the USDA categories from the three maps. Different combinations of reference and training soil data and auxiliary covariables resulted in several different maps. Following the second approach, the PSD data were first classified into the USDA categories, then the texture class maps were compiled directly by data mining methods (classification trees and random forests). The various results were compared to each other as well as to the RK maps. The performance of the different methods and data sets has been examined by testing the accuracy of the geostatistically computed and the directly classified results to assess the most predictive and accurate method. Acknowledgement: Our work was supported by the Hungarian National Scientific Research Foundation (OTKA, Grant No. K105167).
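The additive log-ratio (alr) transform used above to keep the kriged sand/silt/clay maps compositional can be sketched as follows. This is a generic illustration of the transform and its inverse (function names and the choice of clay as the divisor component are assumptions for the example, not details from the study):

```python
import numpy as np

def alr(parts):
    """Additive log-ratio transform of compositional data.

    parts: (n, 3) array of positive sand/silt/clay fractions summing
    to 1 per row. Returns (n, 2) unconstrained coordinates, using the
    last component (clay here) as the divisor.
    """
    parts = np.asarray(parts, dtype=float)
    return np.log(parts[:, :-1] / parts[:, -1:])

def alr_inv(coords):
    """Back-transform alr coordinates to fractions summing to 1."""
    e = np.exp(np.asarray(coords, dtype=float))
    full = np.hstack([e, np.ones((e.shape[0], 1))])  # divisor component = 1
    return full / full.sum(axis=1, keepdims=True)
```

Kriging the two unconstrained alr coordinates and back-transforming the predictions guarantees that interpolated sand, silt and clay always sum to 100%, which independent kriging of the three fractions cannot ensure.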
Class IA phosphoinositide 3-kinase regulates heart size and physiological cardiac hypertrophy.
Luo, Ji; McMullen, Julie R; Sobkiw, Cassandra L; Zhang, Li; Dorfman, Adam L; Sherwood, Megan C; Logsdon, M Nicole; Horner, James W; DePinho, Ronald A; Izumo, Seigo; Cantley, Lewis C
2005-11-01
Class I(A) phosphoinositide 3-kinases (PI3Ks) are activated by growth factor receptors, and they regulate, among other processes, cell growth and organ size. Studies using transgenic mice overexpressing constitutively active and dominant negative forms of the p110alpha catalytic subunit of class I(A) PI3K have implicated this enzyme in the regulation of heart size and physiological cardiac hypertrophy. To further understand the role of class I(A) PI3K in controlling heart growth and to circumvent potential complications from the overexpression of dominant negative and constitutively active proteins, we generated mice with muscle-specific deletion of the p85alpha regulatory subunit and germ line deletion of the p85beta regulatory subunit of class I(A) PI3K. Here we show that mice with cardiac deletion of both p85 subunits exhibit attenuated Akt signaling in the heart, reduced heart size, and altered cardiac gene expression. Furthermore, exercise-induced cardiac hypertrophy is also attenuated in the p85 knockout hearts. Despite such defects in postnatal developmental growth and physiological hypertrophy, the p85 knockout hearts exhibit normal contractility and myocardial histology. Our results therefore provide strong genetic evidence that class I(A) PI3Ks are critical regulators for the developmental growth and physiological hypertrophy of the heart.
Wong, C Y; Wu, E; Wong, T Y
2007-01-01
Information asymmetry has been offered as a reason for unnecessarily high costs in certain industries where significant information asymmetry traditionally exists between providers and consumers, such as healthcare. The purpose of this paper is to examine the impact of introducing the publication of bill sizes as a means to reduce healthcare costs. Specifically, we aim to examine whether this initiative to decrease information asymmetry on healthcare prices between healthcare providers and patients, and between healthcare providers themselves, will lead to lower prices for patients. Bill size data of 29 commonly occurring diagnosis-related groups (DRGs) for two ward classes (B2 and C) over a 16-month period were studied. Each ward class was studied separately, i.e. involving 58 DRG data sets. The mean bill size data as well as the 50th and 90th percentile bill sizes were examined. The study involved some 46,000 inpatient episodes which occurred in the five public sector acute general hospitals of Singapore. Mean prices dropped by 4.14 percent and 9.64 percent for B2 and C classes, respectively. Fifty out of 58 DRG data sets showed a drop in prices. Bill sizes at the 50th percentile dropped by 7.95 percent and 10.12 percent for B2 and C classes, respectively; while at the 90th percentile, the corresponding figures were decreases of 8.01 percent and 11.4 percent for the two ward classes. The act of publishing bill sizes has led to less information asymmetry among providers, thereby facilitating more competitive behaviour among hospitals and lower bill sizes.
Finite dimensional approximation of a class of constrained nonlinear optimal control problems
NASA Technical Reports Server (NTRS)
Gunzburger, Max D.; Hou, L. S.
1994-01-01
An abstract framework for the analysis and approximation of a class of nonlinear optimal control and optimization problems is constructed. Nonlinearities occur in both the objective functional and in the constraints. The framework includes an abstract nonlinear optimization problem posed on infinite dimensional spaces, an approximate problem posed on finite dimensional spaces, and a number of hypotheses concerning the two problems. The framework is used to show that optimal solutions exist, to show that Lagrange multipliers may be used to enforce the constraints, to derive an optimality system from which optimal states and controls may be deduced, and to derive existence results and error estimates for solutions of the approximate problem. The abstract framework and the results derived from that framework are then applied to three concrete control or optimization problems and their approximation by finite element methods. The first involves the von Karman plate equations of nonlinear elasticity, the second, the Ginzburg-Landau equations of superconductivity, and the third, the Navier-Stokes equations for incompressible, viscous flows.
Tang, Yunwei; Jing, Linhai; Li, Hui; Liu, Qingjie; Yan, Qi; Li, Xiuxia
2016-11-22
This study explores the ability of WorldView-2 (WV-2) imagery for bamboo mapping in a mountainous region in Sichuan Province, China. A large part of the study area is covered by shadows in the image, and only a few of the sampled points were usable. In order to identify bamboos based on sparse training data, the sample size was expanded according to the reflectance of multispectral bands selected using principal component analysis (PCA). Then, class separability based on the training data was calculated using a feature space optimization method to select the features for classification. Four regular object-based classification methods were applied based on both sets of training data. The results show that the k-nearest neighbor (k-NN) method produced the greatest accuracy. A geostatistically-weighted k-NN classifier, accounting for the spatial correlation between classes, was then applied to further increase the accuracy. It achieved 82.65% and 93.10% producer's and user's accuracies, respectively, for the bamboo class. The canopy densities were estimated to explain the result. This study demonstrates that the WV-2 image can be used to identify small patches of understory bamboos given limited known samples, and the resulting bamboo distribution facilitates the assessment of giant panda habitats.
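The k-NN vote that the geostatistically-weighted classifier extends can be sketched in a few lines. This is plain, unweighted k-NN on hypothetical two-band spectral samples; the spatial-correlation weighting that distinguishes the paper's classifier is not reproduced here:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Classify pixel x by majority vote of its k nearest training
    pixels in the spectral feature space (Euclidean distance)."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]
    return Counter(y_train[nearest]).most_common(1)[0][0]

# Toy spectral samples: two bands, two classes (values are made up)
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.8, 0.9], [0.9, 0.8]])
y = np.array(["bamboo", "shadow", "shadow", "shadow"])
y = np.array(["bamboo", "bamboo", "shadow", "shadow"])
label = knn_predict(X, y, np.array([0.15, 0.15]), k=3)  # nearest two are bamboo
```

A geostatistical weighting would replace the raw vote with one weighted by each class's spatial covariance with the query location.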
Zhang, Huaguang; Song, Ruizhuo; Wei, Qinglai; Zhang, Tieyan
2011-12-01
In this paper, a novel heuristic dynamic programming (HDP) iteration algorithm is proposed to solve the optimal tracking control problem for a class of nonlinear discrete-time systems with time delays. The algorithm comprises state updating, control policy iteration, and performance index iteration. To obtain the optimal states, the states are also updated, with a "backward iteration" applied to the state updating step. Two neural networks are used to approximate the performance index function and compute the optimal control policy, facilitating the implementation of the HDP iteration algorithm. Finally, we present two examples to demonstrate the effectiveness of the proposed HDP iteration algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liengsawangwong, Raweewan; Yu, T.-K.; Sun, T.-L.
2007-11-01
Background: The purpose of this study was to determine whether the use of optimized CT treatment planning offered better coverage of axillary level III (LIII)/supraclavicular (SC) targets than the empirically derived dose prescriptions that are commonly used. Materials/Methods: Thirty-two consecutive breast cancer patients who underwent CT treatment planning of a SC field were evaluated. Each patient was categorized according to body mass index (BMI) classes: normal, overweight, or obese. The SC and LIII nodal beds were contoured, and four treatment plans for each patient were generated. Three of the plans used empiric dose prescriptions, and these were compared with a CT-optimized plan. Each plan was evaluated by two criteria: whether 98% of the target volume received >90% of the prescribed dose, and whether <5% of the irradiated volume received more than 105% of the prescribed dose. Results: The mean depths of SC and LIII were 3.2 cm (range, 1.4-6.7 cm) and 3.1 cm (range, 1.7-5.8 cm), respectively. The depth of these targets varied across BMI classes (p = 0.01). Among the four sets of plans, the CT-optimized plans were the most successful at achieving both of the dosimetry objectives for every BMI class (normal BMI, p = .003; overweight BMI, p < .0001; obese BMI, p < .001). Conclusions: Across all BMI classes, routine radiation prescriptions did not optimally cover intended targets for every patient. Optimized CT-based treatment planning generated the most successful plans; therefore, we recommend the use of routine CT simulation and treatment planning of SC fields in breast cancer.
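The two plan-evaluation criteria lend themselves to a direct check on per-voxel dose arrays. A sketch, assuming a 50 Gy prescription and hypothetical variable names (the study does not publish its code, and the exact hotspot threshold convention is an assumption):

```python
import numpy as np

def plan_meets_criteria(target_dose, irradiated_dose, rx=50.0):
    """Check the two plan-evaluation criteria described in the study:
    (1) >= 98% of the target volume receives > 90% of the prescribed dose;
    (2) <  5% of the irradiated volume receives > 105% of the prescribed dose.
    Doses are per-voxel arrays in Gy; rx is the prescription dose."""
    coverage_ok = np.mean(target_dose > 0.90 * rx) >= 0.98
    hotspot_ok = np.mean(irradiated_dose > 1.05 * rx) < 0.05
    return coverage_ok and hotspot_ok

# A uniform plan at prescription dose satisfies both criteria
uniform = plan_meets_criteria(np.full(200, 50.0), np.full(2000, 50.0))
```

An empiric prescription that under-doses a deep LIII target would fail criterion (1) here, which is exactly the failure mode the CT-optimized plans avoided.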
Dispersion and sampling of adult Dermacentor andersoni in rangeland in Western North America.
Rochon, K; Scoles, G A; Lysyk, T J
2012-03-01
A fixed precision sampling plan was developed for off-host populations of adult Rocky Mountain wood tick, Dermacentor andersoni (Stiles) based on data collected by dragging at 13 locations in Alberta, Canada; Washington; and Oregon. In total, 222 site-date combinations were sampled. Each site-date combination was considered a sample, and each sample ranged in size from 86 to 250 10 m2 quadrats. Analysis of simulated quadrats ranging in size from 10 to 50 m2 indicated that the most precise sample unit was the 10 m2 quadrat. Samples taken when abundance < 0.04 ticks per 10 m2 were more likely to not depart significantly from statistical randomness than samples taken when abundance was greater. Data were grouped into ten abundance classes and assessed for fit to the Poisson and negative binomial distributions. The Poisson distribution fit only data in abundance classes < 0.02 ticks per 10 m2, while the negative binomial distribution fit data from all abundance classes. A negative binomial distribution with common k = 0.3742 fit data in eight of the 10 abundance classes. Both the Taylor and Iwao mean-variance relationships were fit and used to predict sample sizes for a fixed level of precision. Sample sizes predicted using the Taylor model tended to underestimate actual sample sizes, while sample sizes estimated using the Iwao model tended to overestimate actual sample sizes. Using a negative binomial with common k provided estimates of required sample sizes closest to empirically calculated sample sizes.
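The fixed-precision sample sizes in the final step follow directly from the fitted mean-variance models. For a target precision D = SE/mean, a sketch of both predictors (the Taylor coefficients below are placeholders, not the paper's fitted values; the common k = 0.3742 is the value reported above):

```python
import math

def taylor_sample_size(mean, a, b, precision=0.25):
    """Quadrats needed for fixed precision D (SE/mean) under Taylor's
    power law s^2 = a * m**b:  n = a * m**(b - 2) / D**2.
    The a, b used in any call here are hypothetical, for illustration."""
    return math.ceil(a * mean ** (b - 2) / precision ** 2)

def negbin_sample_size(mean, k, precision=0.25):
    """Same, for a negative binomial with common k:
    s^2 = m + m**2 / k, so n = (1/m + 1/k) / D**2."""
    return math.ceil((1.0 / mean + 1.0 / k) / precision ** 2)

# At low abundance (0.04 ticks per 10 m^2 quadrat), precision is expensive:
n_low = negbin_sample_size(0.04, k=0.3742, precision=0.25)
```

As the abstract's comparison suggests, the negative binomial predictor with common k tracks empirical requirements more closely than either mean-variance regression.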
Study design in high-dimensional classification analysis.
Sánchez, Brisa N; Wu, Meihua; Song, Peter X K; Wang, Wen
2016-10-01
Advances in high throughput technology have accelerated the use of hundreds to millions of biomarkers to construct classifiers that partition patients into different clinical conditions. Prior to classifier development in actual studies, a critical need is to determine the sample size required to reach a specified classification precision. We develop a systematic approach for sample size determination in high-dimensional (large p, small n) classification analysis. Our method utilizes the probability of correct classification (PCC) as the optimization objective function and incorporates the higher criticism thresholding procedure for classifier development. Further, we derive the theoretical bound of maximal PCC gain from feature augmentation (e.g. when molecular and clinical predictors are combined in classifier development). Our methods are motivated and illustrated by a study using proteomics markers to classify post-kidney transplantation patients into stable and rejecting classes.
Generalized Scalar-on-Image Regression Models via Total Variation.
Wang, Xiao; Zhu, Hongtu
2017-01-01
The use of imaging markers to predict clinical outcomes can have a great impact on public health. The aim of this paper is to develop a class of generalized scalar-on-image regression models via total variation (GSIRM-TV), in the sense of generalized linear models, for scalar responses and imaging predictors in the presence of scalar covariates. A key novelty of GSIRM-TV is the assumption that its slope function (or image) belongs to the space of bounded total variation, in order to explicitly account for the piecewise smooth nature of most imaging data. We develop an efficient penalized total variation optimization to estimate the unknown slope function and other parameters. We also establish nonasymptotic error bounds on the excess risk. These bounds are explicitly specified in terms of sample size, image size, and image smoothness. Our simulations demonstrate a superior performance of GSIRM-TV against many existing approaches. We apply GSIRM-TV to the analysis of hippocampus data obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset.
Effect of finite sample size on feature selection and classification: a simulation study.
Way, Ted W; Sahiner, Berkman; Hadjiiski, Lubomir M; Chan, Heang-Ping
2010-02-01
The small number of samples available for training and testing is often the limiting factor in finding the most effective features and designing an optimal computer-aided diagnosis (CAD) system. Training on a limited set of samples introduces bias and variance in the performance of a CAD system relative to that trained with an infinite sample size. In this work, the authors conducted a simulation study to evaluate the performances of various combinations of classifiers and feature selection techniques and their dependence on the class distribution, dimensionality, and the training sample size. The understanding of these relationships will facilitate development of effective CAD systems under the constraint of limited available samples. Three feature selection techniques, the stepwise feature selection (SFS), sequential floating forward search (SFFS), and principal component analysis (PCA), and two commonly used classifiers, Fisher's linear discriminant analysis (LDA) and support vector machine (SVM), were investigated. Samples were drawn from multidimensional feature spaces of multivariate Gaussian distributions with equal or unequal covariance matrices and unequal means, and with equal covariance matrices and unequal means estimated from a clinical data set. Classifier performance was quantified by the area under the receiver operating characteristic curve Az. The mean Az values obtained by resubstitution and hold-out methods were evaluated for training sample sizes ranging from 15 to 100 per class. The number of simulated features available for selection was chosen to be 50, 100, and 200. It was found that the relative performance of the different combinations of classifier and feature selection method depends on the feature space distributions, the dimensionality, and the available training sample sizes. 
The LDA and SVM with radial kernel performed similarly for most of the conditions evaluated in this study, although the SVM classifier showed a slightly higher hold-out performance than LDA for some conditions and vice versa for other conditions. PCA was comparable to or better than SFS and SFFS for LDA at small sample sizes, but inferior for SVM with polynomial kernel. For the class distributions simulated from clinical data, PCA did not show advantages over the other two feature selection methods. Under this condition, the SVM with radial kernel performed better than the LDA when few training samples were available, while LDA performed better when a large number of training samples were available. None of the investigated feature selection-classifier combinations provided consistently superior performance under the studied conditions for different sample sizes and feature space distributions. In general, the SFFS method was comparable to the SFS method while PCA may have an advantage for Gaussian feature spaces with unequal covariance matrices. The performance of the SVM with radial kernel was better than, or comparable to, that of the SVM with polynomial kernel under most conditions studied.
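The optimistic bias of resubstitution relative to hold-out evaluation is easy to reproduce with a stripped-down version of this simulation: identity covariance, a single dimensionality and training sample size, and Fisher LDA only. All settings below are illustrative, not the authors' parameter grid:

```python
import numpy as np

rng = np.random.default_rng(0)

def lda_weights(X0, X1):
    """Fisher LDA direction for two classes (ridge-regularized,
    since the pooled covariance is ill-conditioned when n ~ d)."""
    m0, m1 = X0.mean(0), X1.mean(0)
    S = np.cov(np.vstack([X0 - m0, X1 - m1]).T)
    return np.linalg.solve(S + 1e-6 * np.eye(S.shape[0]), m1 - m0)

def auc(scores0, scores1):
    """Az: probability a class-1 score exceeds a class-0 score."""
    return np.mean(scores1[:, None] > scores0[None, :])

d, n_train, n_test, delta = 20, 15, 2000, 0.5
mu = np.full(d, delta / np.sqrt(d))        # class-mean separation
resub, holdout = [], []
for _ in range(50):
    X0 = rng.normal(0.0, 1.0, (n_train, d))
    X1 = rng.normal(mu, 1.0, (n_train, d))
    w = lda_weights(X0, X1)
    resub.append(auc(X0 @ w, X1 @ w))      # evaluated on training data
    T0 = rng.normal(0.0, 1.0, (n_test, d))
    T1 = rng.normal(mu, 1.0, (n_test, d))
    holdout.append(auc(T0 @ w, T1 @ w))    # evaluated on unseen data
bias = np.mean(resub) - np.mean(holdout)   # positive: resubstitution is optimistic
```

With 15 samples per class in 20 dimensions the resubstitution Az is close to 1 while the hold-out Az stays near the modest true separability, illustrating the bias-variance trade-off the study quantifies.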
A framework for modeling and optimizing dynamic systems under uncertainty
Nicholson, Bethany; Siirola, John
2017-11-11
Algebraic modeling languages (AMLs) have drastically simplified the implementation of algebraic optimization problems. However, there are still many classes of optimization problems that are not easily represented in most AMLs. These classes of problems are typically reformulated before implementation, which requires significant effort and time from the modeler and obscures the original problem structure or context. In this work we demonstrate how the Pyomo AML can be used to represent complex optimization problems using high-level modeling constructs. We focus on the operation of dynamic systems under uncertainty and demonstrate the combination of Pyomo extensions for dynamic optimization and stochastic programming. We use a dynamic semibatch reactor model and a large-scale bubbling fluidized bed adsorber model as test cases.
Kalb, Bradley W.; Huntsman, Brock M.; Caldwell, Colleen A.; Bozek, Michael A.
2018-01-01
The positioning of fishes within a riverscape is dependent on the proximity of complementary habitats. In this study, foraging and non-foraging habitat were quantified monthly over an entire year for a rainbow trout (Oncorhynchus mykiss) population in an isolated, headwater stream in southcentral New Mexico. The stream follows a seasonal thermal and hydrologic pattern typical for a Southwestern stream and was deemed suitable for re-introduction of the native and close relative, Rio Grande cutthroat trout (O. clarkii virginalis). However, uncertainty associated with limited habitat needed to be resolved if repatriation of the native fish was to be successful. Habitat was evaluated using resource selection functions with a mechanistic drift-foraging model to explain trout distributions. Macroinvertebrate drift was strongly season- and temperature-dependent (lower in winter and spring, higher in summer and fall). Models identified stream depth as the most limiting factor for habitat selection across seasons and size-classes. Additionally, positions closer to cover were selected during the winter by smaller size-classes (0, 1, 2), while net energy intake was important during the spring for most size-classes (0, 1, 2, 3). Drift-foraging models identified that 81% of observed trout selected positions that could meet maintenance levels throughout the year. Moreover, 40% of selected habitats could sustain maximum growth. Stream positions occupied by rainbow trout were more energetically profitable than random sites regardless of season or size-class. Larger size-classes (3, 4+) were energetically more limited throughout the year than were smaller size-classes. This research suggests that habitat in the form of deep pools is of paramount importance for rainbow trout or native cutthroat trout.
Earl W. Lathrop; Chris Osborne; Anna Rochester; Kevin Yeung; Samuel Soret; Rochelle Hopper
1991-01-01
Size class distribution of Quercus engelmannii (Engelmann oak) on the Santa Rosa Plateau was studied to understand whether current recruitment of young oaks is sufficient to maintain the population in spite of high natural mortality and impacts of development in some portions of the plateau woodland. Sapling-size oaks (1-10 cm dbh) made up 5.56 pct...
Ou, Jian de; Wu, Zhi Zhuang; Luo, Ning
2016-10-01
In order to clarify the effects of forest gap size on the growth and stem form quality of Taxus wallichina var. mairei and the effectiveness of precious timber cultivation, 25 sample plots in Cunninghamia lanceolata forest gaps were established in Mingxi County, Fujian Province, China to determine the growth, stem form and branching indices of T. wallichina var. mairei seedlings. The relationships between gap size and growth, stem form and branching were investigated. The 25 sample plots were located in five microhabitats classified by gap size as follows: Classes 1, 2, 3, 4 and 5, with gap sizes of 25-50 m2, 50-75 m2, 75-100 m2, 100-125 m2 and 125-150 m2, respectively. The evaluation index system of precious timbers was built by using hierarchical analysis. The five classes of forest gaps were evaluated comprehensively by using the multiobjective decision making method. The results showed that gap size significantly affected 11 indices, i.e., height, DBH, crown width, forking rate, stem straightness, stem fullness, taperingness, diameter height ratio, height under living branch, interval between branches, and max-branch base diameter. Classes 1 and 2 both significantly promoted the growth of height, DBH and crown width, significantly inhibited forking rate and taperingness, and improved stem straightness. Class 2 significantly improved stem fullness and diameter height ratio. Classes 1 and 2 significantly improved height under living branch and reduced max-branch base diameter. Class 1 significantly increased the interval between branches. Classes 1 and 2 significantly improved the comprehensive evaluation score of precious timbers. This study suggested that controlled cutting intensity could be used to create forest gaps of 25-75 m2, which improved the precious timber cultivating process of T. wallichina var. mairei in C. lanceolata forests.
NASA Astrophysics Data System (ADS)
Johnson, E. R.; Rowland, R. D.; Protokowicz, J.; Inamdar, S. P.; Kan, J.; Vargas, R.
2016-12-01
Extreme storm events have tremendous erosive energy which is capable of mobilizing vast amounts of material from watershed sources into fluvial systems. This complex mixture of sediment and particulate organic matter (POM) is a nutrient source, and has the potential to impact downstream water quality. The impact of POM on receiving aquatic systems can vary not only by the total amount exported but also by the various sources involved and the particle sizes of POM. This study examines the composition of POM in potential sources and within-event POM by: (1) determining the amount and quality of dissolved organic matter (DOM) that can be leached from coarse, medium and fine particle classes; (2) assessing the C and N content and isotopic character of within-event POM; and (3) coupling physical and chemical properties to evaluate storm event POM influence on stream water. Storm event POM samples and source sediments were collected from a forested headwater catchment (second order stream) in the Piedmont region of Maryland. Samples were sieved into three particle classes - coarse (2mm-1mm), medium (1mm-250µm) and fine (<250µm). Extractions were performed for three particle class sizes and the resulting fluorescent organic matter was analyzed. Carbon (C) and Nitrogen (N) amount, C:N ratio, and isotopic analysis of 13C and 15N were performed on solid state event and source material. Future work will include examination of microbial communities associated with POM particle size classes. Physical size class separation of within-event POM exhibited differences in C:N ratios, δ15N composition, and extracted DOM lability. Smaller size classes exhibited lower C:N ratios, more enriched δ15N and more recalcitrant properties in leached DOM. Source material had varying C:N ratios and contributions to leached DOM. These results indicate that both source and size class strongly influence the POM contribution to fluvial systems during large storm events.
Liu, Yan-Jun; Tong, Shaocheng
2016-11-01
In this paper, we propose an optimal control scheme-based adaptive neural network design for a class of unknown nonlinear discrete-time systems. The controlled systems are in a block-triangular multi-input-multi-output pure-feedback structure, i.e., there are both state and input couplings and nonaffine functions included in every equation of each subsystem. The design objective is to provide a control scheme which not only guarantees the stability of the systems, but also achieves optimal control performance. The main contribution of this paper is that it achieves, for the first time, optimal performance for such a class of systems. Owing to the interactions among subsystems, designing an optimal control signal is a difficult task. The design ideas are that: 1) the systems are transformed into an output predictor form; 2) for the output predictor, the ideal control signal and the strategic utility function can be approximated by using an action network and a critic network, respectively; and 3) an optimal control signal is constructed with the weight update rules to be designed based on a gradient descent method. The stability of the systems can be proved based on the difference Lyapunov method. Finally, a numerical simulation is given to illustrate the performance of the proposed scheme.
49 CFR 175.704 - Plutonium shipments.
Code of Federal Regulations, 2010 CFR
2010-10-01
... lower cargo compartment in the aft-most location that is possible for cargo of its size and weight, and... aboard an aircraft carrying other cargo required to bear any of the following labels: Class 1 (all Divisions), Class 2 (all Divisions), Class 3, Class 4 (all Divisions), Class 5 (all Divisions), or Class 8...
Harvesting, predation and competition effects on a red coral population
NASA Astrophysics Data System (ADS)
Abbiati, M.; Buffoni, G.; Caforio, G.; Di Cola, G.; Santangelo, G.
A Corallium rubrum population, dwelling in the Ligurian Sea, has been under observation since 1987. Biometric descriptors of colonies (base diameter, weight, number of polyps, number of growth rings) have been recorded and correlated. The population size structure was obtained by distributing the colonies into diameter classes, each size class representing the average annual increment of diameter growth. The population was divided into ten classes, including a recruitment class. This size structure showed a fairly regular trend in the first four classes. The irregularity of survival in the older classes agreed with field observations on harvesting and predation. Demographic parameters such as survival, growth plasticity and natality coefficients were estimated from the experimental data. On this basis a discrete nonlinear model was implemented. The model is based on a kind of density-dependent Leslie matrix, where the feedback term only occurs in survival of the first class; the recruitment function is assumed to be dependent on the total biomass and related to inhibiting effects due to competitive interactions. Stability analysis was applied to steady-state solutions. Numerical simulations of population evolution were carried out under different conditions. The dynamics of settlement and the effects of disturbances such as harvesting, predation and environmental variability were studied.
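The model structure described here, a Leslie-type matrix with density-dependent feedback driven by total biomass, can be sketched as follows. The transition rate, fecundities and the Beverton-Holt-style feedback form below are illustrative assumptions (the paper estimates its coefficients from field data, and applies the feedback to first-class survival rather than directly to recruitment as done here for simplicity):

```python
import numpy as np

def step(n, P, f, K):
    """Advance the population one year.
    n: colonies per size class; P: survival/growth transition matrix;
    f: per-class larval contribution; K: biomass scale of the assumed
    density-dependent inhibition of recruitment."""
    biomass = n.sum()                          # crude proxy for total biomass
    recruits = (f @ n) / (1.0 + biomass / K)   # crowding suppresses settlement
    n_next = P @ n
    n_next[0] += recruits
    return n_next

classes = 10                                   # ten classes incl. recruitment class
P = np.diag(np.full(classes - 1, 0.7), k=-1)   # 70% survive and advance each year
f = np.full(classes, 5.0)
f[:3] = 0.0                                    # youngest classes do not reproduce
n = np.zeros(classes)
n[0] = 100.0                                   # initial pulse of recruits
for _ in range(300):
    n = step(n, P, f, K=50.0)                  # iterate toward a steady structure
```

Harvesting or predation on the older classes can be explored by zeroing entries of `P`, mimicking the irregular survival the authors observed in the field.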
NASA Astrophysics Data System (ADS)
Pfannkuche, O.; Soltwedel, T.
1998-12-01
In the context of the European OMEX Programme this investigation focused on gradients in the biomass and activity of the small benthic size spectrum along a transect across the Goban Spur from the outer Celtic Sea into the Porcupine Abyssal Plain. The effects of food pulses (seasonal, episodic) on this part of the benthic size spectrum were investigated. Sediments sampled during eight expeditions at different seasons, covering a range from 200 m to 4800 m water depth, were assayed with biochemical bulk measurements: determinations of chloroplastic pigment equivalents (CPE), the sum of chlorophyll a and its breakdown products, provide information concerning the input of phytodetrital matter to the seafloor; phospholipids were analyzed to estimate the total biomass of small benthic organisms (including bacteria, fungi, flagellata, protozoa and small metazoan meiofauna). A new term, 'small size class biomass' (SSCB), is introduced for the biomass of the smallest size classes of sediment-inhabiting organisms; the reduction of fluorescein-di-acetate (FDA) was determined to evaluate the potential activity of ester-cleaving bacterial exoenzymes in the sediment samples. At all stations benthic biomass was predominantly composed of the small size spectrum (90% on the shelf; 97-98% in the bathyal and abyssal parts of the transect). Small size class biomass (integrated over a 10 cm sediment column) ranged from 8 g C m-2 on the shelf to 2.1 g C m-2 on the adjacent Porcupine Abyssal Plain, decreasing exponentially with increasing water depth. However, regressions of SSCB, macrofauna biomass and metazoan meiofauna biomass against water depth exhibited a significantly flatter slope for the small size classes than for the larger organisms. CPE values indicated a pronounced seasonal cycle on the shelf and upper slope with twin peaks of phytodetrital deposition in mid spring and late summer. The deeper stations seem to receive a single annual flux maximum in late summer.
SSCB and heterotrophic activity are significantly correlated to the amount of sediment-bound pigments. Seasonality in pigment concentrations is clearly followed by SSCB and activity. In contrast to macro- and megafauna which integrate over larger periods (months/years), the small benthic size classes, namely bacteria and foraminifera, proved to be the most reactive potential of the benthic communities to any perturbations on short time scales (days/weeks). The small size classes, therefore, occupy a key role in early diagenetic processes.
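The reported exponential decrease of SSCB with water depth can be illustrated by fitting a two-point exponential through the shelf and abyssal values quoted above. This is a back-of-envelope interpolation, not the study's regression:

```python
import math

# Endpoint values reported in the abstract: ~8 g C m^-2 at the 200 m
# shelf station and ~2.1 g C m^-2 at the 4800 m abyssal station.
d1, b1 = 200.0, 8.0
d2, b2 = 4800.0, 2.1

# Assuming biomass ~ B0 * exp(-c * depth), solve for the decay rate c
# and the notional surface intercept B0:
c = math.log(b1 / b2) / (d2 - d1)
B0 = b1 * math.exp(c * d1)

def sscb(depth):
    """Interpolated small-size-class biomass (g C m^-2): a two-point
    exponential sketch, valid only between the fitted stations."""
    return B0 * math.exp(-c * depth)
```

The small decay rate (roughly 3e-4 per metre) reflects the abstract's point that the small size classes thin out with depth far more gently than macro- or meiofauna.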
"Optimal" Size and Schooling: A Relative Concept.
ERIC Educational Resources Information Center
Swanson, Austin D.
Issues in economies of scale and optimal school size are discussed in this paper, which seeks to explain the curvilinear nature of the educational cost curve as a function of "transaction costs" and to establish "optimal size" as a relative concept. Based on the argument that educational consolidation has facilitated diseconomies of scale, the…
Meng, Dan; Falconer, James; Krauel-Goellner, Karen; Chen, John J J J; Farid, Mohammed; Alany, Raid G
2008-01-01
The purpose of this study was to design and build a supercritical CO(2) anti-solvent (SAS) unit and use it to produce microparticles of the class II drug carbamazepine. The operation conditions of the constructed unit affected the carbamazepine yield. Optimal conditions were: organic solution flow rate of 0.15 mL/min, CO(2) flow rate of 7.5 mL/min, pressure of 4,200 psi, over 3,000 s and at 33 degrees C. The drug solid-state characteristics, morphology and size distribution were examined before and after processing using X-ray powder diffraction and differential scanning calorimetry, scanning electron microscopy and laser diffraction particle size analysis, respectively. The in vitro dissolution of the treated particles was investigated and compared to that of untreated particles. Results revealed a change in the crystalline structure of carbamazepine with different polymorphs co-existing under various operation conditions. Scanning electron micrographs showed a change in the crystalline habit from the prismatic into bundled whiskers, fibers and filaments. The volume weighted diameter was reduced from 209 µm to 29 µm. Furthermore, the SAS CO(2) process yielded particles with significantly improved in vitro dissolution. Further research is needed to optimize the operation conditions of the self-built unit to maximize the production yield and produce a uniform polymorphic form of carbamazepine.
On the functional optimization of a certain class of nonstationary spatial functions
Christakos, G.; Paraskevopoulos, P.N.
1987-01-01
Procedures are developed in order to obtain optimal estimates of linear functionals for a wide class of nonstationary spatial functions. These procedures rely on well-established constrained minimum-norm criteria, and are applicable to multidimensional phenomena which are characterized by the so-called hypothesis of inherentity. The latter requires elimination of the polynomial, trend-related components of the spatial function leading to stationary quantities, and it also generates some interesting mathematics within the context of modelling and optimization in several dimensions. The arguments are illustrated using various examples, and a case study is computed in detail. © 1987 Plenum Publishing Corporation.
Nguyen, Huong Minh
2014-01-01
Bacteriophage T7 terminator Tφ is a class I intrinsic terminator coding for an RNA hairpin structure immediately followed by oligo(U), which has been extensively studied in terms of its transcription termination mechanism, but little is known about its physiological or regulatory functions. In this study, using a T7 mutant phage in which a 31-bp segment of Tφ was deleted from the genome, we discovered that deletion of Tφ from T7 reduces the phage burst size but delays lysis timing, both of which are disadvantageous for the phage. The burst downsizing could result directly from Tφ deletion-caused upregulation of gene 17.5, coding for holin, among other Tφ downstream genes, because infection of gp17.5-overproducing Escherichia coli by wild-type T7 phage showed similar burst downsizing. However, the lysis delay was not associated with cellular levels of holin or lysozyme or with rates of phage adsorption. Instead, when allowed to evolve spontaneously in five independent adaptation experiments, the Tφ-lacking mutant phage, after 27 or 29 passages, recovered both burst size and lysis time reproducibly by deleting early genes 0.5, 0.6, and 0.7 of class I, among other mutations. Deletion of genes 0.5 to 0.7 from the Tφ-lacking mutant phage decreased expression of several Tφ downstream genes to levels similar to those of the wild-type phage. Accordingly, phage T7 lysis timing is associated with cellular levels of Tφ downstream gene products. This suggests the involvement of unknown factor(s) besides the known lysis proteins, lysozyme and holin, and that Tφ plays a role in optimizing burst size and lysis time during T7 infection. PMID:24335287
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spong, D.A.; Hirshman, S.P.; Whitson, J.C.
A new class of low aspect ratio toroidal hybrid stellarators is found using a more general plasma confinement optimization criterion than quasi-symmetrization. The plasma current profile and shape of the outer magnetic flux surface are used as control variables to achieve near constancy of the longitudinal invariant J* on internal flux surfaces (quasi-omnigeneity), in addition to a number of other desirable physics target properties. We find that a range of compact (small aspect ratio A), high {beta} (ratio of thermal energy to magnetic field energy), low plasma current devices exist which have significantly improved confinement for both thermal and energetic (collisionless) particle components. With reasonable increases in magnetic field and geometric size, such devices can also be scaled to confine 3.5 MeV alpha particle orbits.
Preparation and evaluation of cilnidipine microemulsion
Tandel, Hemal; Raval, Krunal; Nayani, Anil; Upadhay, Manish
2012-01-01
Cilnidipine is a calcium channel blocker with neuroprotective action and a BCS Class II drug; formulating it as a microemulsion should therefore increase its solubility, absorption, and bioavailability. The formulation was prepared by the titration method using tocotrienol, Tween 20, and Transcutol HP as oil, surfactant, and co-surfactant, respectively, and characterized for dilutability, dye solubility, assay (98.39±0.06), pH (6.6±1.5), viscosity (98±1.0 cps), and conductivity (0.2±0.09 μS/cm). The formulation was optimized on the basis of percentage transmittance (99.269±0.23 at 700 nm), globule size (13.31±4.3 nm), and zeta potential (–11.4±2.3 mV). Cilnidipine microemulsion was found to be stable for 3 months. PMID:23066184
Stochastic Analysis of Reaction–Diffusion Processes
Hu, Jifeng; Kang, Hye-Won
2013-01-01
Reaction and diffusion processes are used to model chemical and biological processes over a wide range of spatial and temporal scales. Several routes to the diffusion process at various levels of description in time and space are discussed and the master equation for spatially discretized systems involving reaction and diffusion is developed. We discuss an estimator for the appropriate compartment size for simulating reaction–diffusion systems and introduce a measure of fluctuations in a discretized system. We then describe a new computational algorithm for implementing a modified Gillespie method for compartmental systems in which reactions are aggregated into equivalence classes and computational cells are searched via an optimized tree structure. Finally, we discuss several examples that illustrate the issues that have to be addressed in general systems. PMID:23719732
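The compartment-based simulation described above can be illustrated with a minimal sketch. This is not the authors' modified algorithm (which aggregates reactions into equivalence classes and searches cells via an optimized tree structure), but a plain Gillespie SSA for pure diffusion on a 1-D chain of compartments; the hop rate and initial counts are illustrative.

```python
import random

def gillespie_diffusion(counts, rate, t_end, seed=0):
    """Minimal Gillespie SSA for pure diffusion on a 1-D chain of
    compartments: each molecule hops to a neighbouring compartment
    with per-molecule rate `rate`."""
    rng = random.Random(seed)
    t = 0.0
    counts = list(counts)
    n = len(counts)
    while t < t_end:
        # Propensities: one hop channel per (compartment, neighbour) pair.
        props = []
        for i in range(n):
            if i > 0:
                props.append((counts[i] * rate, i, i - 1))
            if i < n - 1:
                props.append((counts[i] * rate, i, i + 1))
        total = sum(p for p, _, _ in props)
        if total == 0:
            break
        # Exponential waiting time, then pick one hop proportionally.
        t += rng.expovariate(total)
        if t >= t_end:
            break
        r = rng.uniform(0, total)
        acc = 0.0
        for p, src, dst in props:
            acc += p
            if r <= acc:
                counts[src] -= 1
                counts[dst] += 1
                break
    return counts
```

Because the sketch models diffusion only, the total molecule count is conserved, which gives a quick sanity check on the implementation.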
Wang, Xiaoxue; Li, Xuyong
2017-01-01
Particle grain size is an important indicator of the variability in the physical characteristics and pollutant composition of road-deposited sediments (RDS). Quantitative assessment of the grain-size variability in RDS amount, metal concentration, metal load, and GSFLoad is essential for eliminating the uncertainty it causes in estimating RDS emission load and formulating control strategies. In this study, grain-size variability was explored and quantified using the coefficient of variation (Cv) of the particle size compositions, metal concentrations, metal loads, and GSFLoad values in RDS. Several trends in the grain-size variability of RDS were identified: (i) the variability of the medium class (105–450 µm) in terms of particle size composition, metal loads, and GSFLoad values was smaller than that of the fine (<105 µm) and coarse (450–2000 µm) classes; (ii) the grain-size variability in terms of metal concentrations increased as particle size increased, while the metal concentrations decreased; (iii) compared to the Lorenz coefficient (Lc), the Cv was similarly effective at describing grain-size variability, while being simpler to calculate because it did not require the data to be pre-processed. The results of this study will facilitate identification of the uncertainty in modelling RDS caused by grain-size class variability. PMID:28788078
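The variability measure used above is straightforward to compute: the coefficient of variation is the sample standard deviation divided by the mean, evaluated per grain-size class. A minimal sketch follows; the metal-load values are hypothetical, not data from the study.

```python
import statistics

def coefficient_of_variation(values):
    """Cv = sample standard deviation / mean: a dimensionless measure of
    relative variability, comparable across grain-size classes."""
    mean = statistics.mean(values)
    return statistics.stdev(values) / mean

# Hypothetical metal loads (mg/kg) for one grain-size class across sites.
loads = [120.0, 135.0, 110.0, 128.0]
cv = coefficient_of_variation(loads)
```

Unlike the Lorenz coefficient, no sorting or cumulative pre-processing of the data is needed, which is the simplicity advantage noted in the abstract.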
Depaoli, Sarah
2013-06-01
Growth mixture modeling (GMM) represents a technique that is designed to capture change over time for unobserved subgroups (or latent classes) that exhibit qualitatively different patterns of growth. The aim of the current article was to explore the impact of latent class separation (i.e., how similar growth trajectories are across latent classes) on GMM performance. Several estimation conditions were compared: maximum likelihood via the expectation maximization (EM) algorithm and the Bayesian framework implementing diffuse priors, "accurate" informative priors, weakly informative priors, data-driven informative priors, priors reflecting partial knowledge of parameters, and "inaccurate" (but informative) priors. The main goal was to provide insight about the optimal estimation condition under different degrees of latent class separation for GMM. Results indicated that optimal parameter recovery was obtained through the Bayesian approach using "accurate" informative priors, and partial-knowledge priors showed promise for the recovery of the growth trajectory parameters. Maximum likelihood and the remaining Bayesian estimation conditions yielded poor parameter recovery for the latent class proportions and the growth trajectories. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
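The maximum-likelihood (EM) side of the comparison can be illustrated with a toy 1-D analogue. This is a sketch of plain EM for a two-component Gaussian mixture under high class separation, not the Bayesian estimation conditions or the longitudinal GMM studied in the article; all data are synthetic.

```python
import math
import random

def em_gmm_1d(data, iters=200):
    """EM for a two-component 1-D Gaussian mixture (maximum likelihood):
    alternate responsibilities (E-step) and parameter updates (M-step)."""
    mu = [min(data), max(data)]   # crude but serviceable initialisation
    sigma = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        resp = []
        for x in data:
            p = [pi[k] / (sigma[k] * math.sqrt(2 * math.pi))
                 * math.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
                 for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: update mixing weights, means, and variances.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            sigma[k] = math.sqrt(max(var, 1e-6))
    return mu, sigma, pi

rng = random.Random(1)
# Two well-separated latent classes, mirroring "high class separation".
data = ([rng.gauss(0.0, 1.0) for _ in range(200)]
        + [rng.gauss(8.0, 1.0) for _ in range(200)])
mu, sigma, pi = em_gmm_1d(data)
```

With this degree of separation, EM recovers both means and the 50/50 class proportions reliably; the article's point is that recovery degrades as the classes move closer together.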
A population model for a long-lived, resprouting chaparral shrub: Adenostoma fasciculatum
Stohlgren, Thomas J.; Rundel, Philip W.
1986-01-01
Extensive stands of Adenostoma fasciculatum H.&A. (chamise) in the chaparral of California are periodically rejuvenated by fire. A population model based on size-specific demographic characteristics (thinning and fire-caused mortality) was developed to generate probable age distributions within size classes and survivorship curves for typical stands. The model was modified to assess the long-term effects of different mortality rates on age distributions. Under the observed mean mortality rate (28.7%), model output suggests some shrubs can survive more than 23 fires. A 10% increase in mortality rate by size class slightly shortened the survivorship curve, while a 10% decrease in mortality rate by size class greatly elongated the curve. This approach may be applicable to other long-lived plant species with complex life histories.
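The survivorship arithmetic behind the model runs can be sketched directly. Assuming, for illustration only, a constant per-fire mortality applied independently at each fire (the actual model is size-specific), survivorship after n fires is (1 − m)^n; the ±10% runs mirror the sensitivity analysis described above.

```python
def survivorship(mortality, n_fires):
    """Fraction of a cohort surviving n_fires at constant per-fire mortality."""
    return (1.0 - mortality) ** n_fires

def fires_until_rare(mortality, threshold=1e-6):
    """Number of fires after which the surviving fraction drops below
    `threshold`, showing how strongly the curve responds to mortality."""
    n, frac = 0, 1.0
    while frac >= threshold:
        frac *= 1.0 - mortality
        n += 1
    return n

base = 0.287  # observed mean fire-caused mortality rate from the abstract
curve_length = {m: fires_until_rare(m) for m in (base * 0.9, base, base * 1.1)}
```

Even this toy version reproduces the asymmetry reported above: a 10% decrease in mortality lengthens the curve more than a 10% increase shortens it.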
The effect of sample size and disease prevalence on supervised machine learning of narrative data.
McKnight, Lawrence K.; Wilcox, Adam; Hripcsak, George
2002-01-01
This paper examines the independent effects of outcome prevalence and training sample size on inductive learning performance. We trained 3 inductive learning algorithms (MC4, IB, and Naïve-Bayes) on 60 simulated datasets of parsed radiology text reports labeled with 6 disease states. Data sets were constructed to define positive outcome states at 5 prevalence rates (1, 5, 10, 25, and 50%) in training set sizes of 200 and 2,000 cases. We found that the effect of outcome prevalence is significant when outcome classes drop below 10% of cases. The effect appeared independent of sample size, induction algorithm used, or class label. Work is needed to identify methods of improving classifier performance when outcome classes are rare. PMID:12463878
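A hedged sketch of why low prevalence degrades learning: with a rare positive class, the trivial always-negative classifier already attains high accuracy while detecting nothing, so an accuracy-driven learner has little incentive to model the rare class. The prevalence values mirror those in the study; the function itself is illustrative, not the paper's method.

```python
def majority_baseline_metrics(prevalence, n=2000):
    """Accuracy and sensitivity of the always-predict-negative baseline
    on a dataset of n cases with the given positive-class prevalence."""
    positives = round(n * prevalence)
    negatives = n - positives
    accuracy = negatives / n   # every case predicted negative
    sensitivity = 0.0          # no positive case is ever detected
    return accuracy, sensitivity

# Prevalence rates from the study: at 1% the do-nothing baseline is
# already 99% "accurate", which any learned classifier must beat.
baselines = {p: majority_baseline_metrics(p) for p in (0.01, 0.05, 0.10, 0.25, 0.50)}
```

This is why metrics such as sensitivity or area under the ROC curve, rather than raw accuracy, are the meaningful yardsticks for rare outcome classes.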
Test Operation Procedure (TOP) 01-1-010A Vehicle Test Course Severity (Surface Roughness)
2017-12-12
…United States Department of Agriculture (USDA) classifications, respectively. [Table 10, Particle Size Classes, lists size classes such as cobble and gravel, >4.75 mm particle diameter; …] Abbreviations: USCS, Unified Soil Classification System; USDA, United States Department of Agriculture; UTM, Universal Transverse Mercator; WNS, wave number…
Montane conifer fuel dynamics, Yosemite National Park
van Wagtendonk, J.W.; Moore, P.E.
1997-01-01
Litter and woody fuel accumulation rates were measured over 7 years for 7 montane Sierra Nevada conifer species, including giant sequoia, ponderosa pine, sugar pine, Jeffrey pine, incense-cedar, and white fir. Data are from four sites per size class per species, with four size classes each. Nonspatial, georeferenced.
Effect Sizes in Three-Level Cluster-Randomized Experiments
ERIC Educational Resources Information Center
Hedges, Larry V.
2011-01-01
Research designs involving cluster randomization are becoming increasingly important in educational and behavioral research. Many of these designs involve two levels of clustering or nesting (students within classes and classes within schools). Researchers would like to compute effect size indexes based on the standardized mean difference to…
Louisiana forests: Status and outlook
Paul A. Murphy
1975-01-01
Between 1964 and 1974, forest area in Louisiana declined 9 percent to 14.5 million acres. Softwood volume increased 31 percent to 9 billion cubic feet, and hardwood declined 7 percent to 7.7 billion. All softwood size classes had increases in volume, and all hardwood size classes had decreases.
MARK PYRON; ALAN P. COVICH
2003-01-01
Snail size-frequency distributions in Rios Espiritu Santo and Mameyes, which drain the Luquillo Experimental Forest, Puerto Rico, showed that Neritina punctulata with shell lengths greater than 30 mm were the most abundant size class at upstream sites. The highest densities for all size classes were at the downstream sites. Growth rates were 0.015 mm/day for a large...
Effects of Nanoparticle Size on Multilayer Formation and Kinetics of Tethered Enzymes.
Lata, James P; Gao, Lizeng; Mukai, Chinatsu; Cohen, Roy; Nelson, Jacquelyn L; Anguish, Lynne; Coonrod, Scott; Travis, Alexander J
2015-09-16
Despite numerous applications, we lack fundamental understanding of how variables such as nanoparticle (NP) size influence the activity of tethered enzymes. Previously, we showed that biomimetic oriented immobilization yielded higher specific activities versus nonoriented adsorption or carboxyl-amine binding. Here, we standardize NP attachment strategy (oriented immobilization via hexahistidine tags) and composition (Ni-NTA coated gold NPs), to test the impact of NP size (⌀5, 10, 20, and 50 nm) on multilayer formation, activity, and kinetic parameters (kcat, KM, kcat/KM) of enzymes representing three different classes: glucose-6-phosphate isomerase (GPI), an isomerase; Glyceraldehyde-3-phosphate dehydrogenase S (GAPDHS), an oxidoreductase; and pyruvate kinase (PK), a transferase. Contrary to other reports, we observed no trend in kinetic parameters for individual enzymes when found in monolayers (<100% enzyme coverage), suggesting an advantage for oriented immobilization versus other attachment strategies. Saturating the NPs to maximize activity per NP resulted in enzyme multilayer formation. Under these conditions, total activity per NP increased with increasing NP size. Conversely, specific activity for all three enzymes was highest when tethered to the smallest NPs, retaining a remarkable 73-94% of the activity of free/untethered enzymes. Multilayer formations caused a clear trend of kcat decreasing with increasing NP size, yet negligible change in KM. Understanding the fundamental relationships between NP size and tethered enzyme activity enables optimized design of various applications, maximizing activity per NP or activity per enzyme molecule.
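The kinetic parameters compared above fit the standard Michaelis-Menten framework, which can be sketched in a few lines; the parameter values below are hypothetical illustrations, not measurements from the study.

```python
def mm_rate(kcat, km, e_total, s):
    """Michaelis-Menten initial rate: v = kcat * [E]_total * [S] / (KM + [S]).
    kcat sets the saturating turnover; kcat/KM is the catalytic efficiency
    that governs behaviour at low substrate concentration."""
    return kcat * e_total * s / (km + s)

# The multilayer trend reported above (kcat falls with NP size, KM barely
# changes) lowers the saturating rate but leaves the half-saturating [S]
# unchanged -- illustrated here with hypothetical parameters.
v_free = mm_rate(10.0, 2.0, 1.0, 2.0)
v_multilayer = mm_rate(5.0, 2.0, 1.0, 2.0)
```

At [S] = KM the rate is exactly half the saturating value, so halving kcat halves the observed rate at every substrate concentration.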
Capitalizing on Small Class Size. ERIC Digest Number 136.
ERIC Educational Resources Information Center
O'Connell, Jessica; Smith, Stuart C.
This Digest examines school districts' efforts to reap the greatest benefit from smaller classes. Although the report discusses teaching strategies that are most effective in small classes, research has shown that teachers do not significantly change their teaching practices when they move from larger to smaller classes. Smaller classes mean…
NASA Astrophysics Data System (ADS)
Shi, Zhongran; Wang, Ruizhen; Wang, Qingfeng; Su, Hang; Chai, Feng; Yang, Caifu
For the purpose of obtaining optimal microstructures and mechanical properties of the CGHAZ under high-heat-input welding, continuous cooling transformation diagrams of the coarse grain heat-affected zone (CGHAZ) and the corresponding microstructures were investigated for E36 class V-N-Ti, V-Ti, and Nb-Ti shipbuilding steels. The results indicated that the CGHAZ continuous transformation behaviors of Nb-Ti and V-Ti steel were similar, but the V-retard phenomenon was not as apparent as that of Nb. In addition, the cooling rate of ferrite transformation of V-Ti steel was higher than that of Nb-Ti steel. The nitrogen addition in the V-Ti steel enhanced the ferrite transformation, since increasing the nitrogen produces fine (Ti, V)(C, N) particles and refines the original austenite size, which promotes ferrite nucleation. The bainite transformation range of V-N-Ti steel was obviously lower than that of Nb-Ti and V-Ti steel at t8/5 ≥ 100 s.
Integration of Mirror Design with Suspension System Using NASA's New Mirror Modeling Software
NASA Technical Reports Server (NTRS)
Arnold, William R., Sr.; Bevan, Ryan M.; Stahl, H. Philip
2013-01-01
Advances in mirror fabrication are making very large space-based telescopes possible. In many applications, only monolithic mirrors can meet the performance requirements. The existing and near-term planned heavy launch vehicles place a premium on the lowest possible mass, and the available payload shroud sizes limit near-term designs to 4-meter-class mirrors. Practical 8-meter-class and beyond designs could encourage planners to include larger shrouds, if it can be proven that such mirrors can be manufactured. These two factors, lower mass and larger mirrors, present the classic optimization problem. There is a practical upper limit to how large a mirror can be supported by a purely kinematic mount system handling both operational and launch loads. This paper shows how the suspension system and mirror blank need to be designed simultaneously. We will also explore the concepts of auxiliary support systems which act only during launch and disengage on orbit. We will define the required characteristics of these systems and show how they can substantially reduce the mirror mass.
Chen, Shun; Jin, Tuo
2016-01-01
Cationic polyimines polymerized through aromatically conjugated bis-imine linkages and intra-molecular cross-linking were found to be a new class of effective transfection materials for their flexibility in structural optimization, responsiveness to intracellular environment, the ability to facilitate endosome escape and cytosol release of the nucleic acids, as well as self-metabolism. When three phthalaldehydes of different substitution positions were used to polymerize highly branched low-molecular weight polyethylenimine (PEI 1.8K), the product through ortho-phthalimines (named PPOP) showed significantly higher transfection activity than its two tere- and iso-analogs (named PPTP and PPIP). Physicochemical characterization confirmed the similarity of three polyimines in pH-responded degradability, buffer capacity, as well as the size and Zeta potential of the polyplexes formed from the polymers. A mechanistic speculation may be that the ortho-positioned bis-imine linkage of PPOP may only lead to the straight trans-configuration due to steric hindrance, resulting in larger loops of intra-polymer cross-linking and more flexible backbone. PMID:26869931
Li, Baozhen; Ge, Tida; Xiao, Heai; Zhu, Zhenke; Li, Yong; Shibistova, Olga; Liu, Shoulong; Wu, Jinshui; Inubushi, Kazuyuki; Guggenberger, Georg
2016-04-01
Red soils are the major land resource in subtropical and tropical areas and are characterized by low phosphorus (P) availability. To assess the availability of P for plants and the potential stability of P in soil, two pairs of subtropical red soil samples from a paddy field and an adjacent uncultivated upland were collected from Hunan Province, China. Analysis of total P and Olsen P and sequential extraction was used to determine the inorganic and organic P fractions in different aggregate size classes. Our results showed that the soil under paddy cultivation had lower proportions of small aggregates and higher proportions of large aggregates than those from the uncultivated upland soil. The portion of >2-mm-sized aggregates increased by 31 and 20 % at Taoyuan and Guiyang, respectively. The total P and Olsen P contents were 50-150 and 50-300 % higher, respectively, in the paddy soil than those in the upland soil. Higher inorganic and organic P fractions tended to be enriched in both the smallest and largest aggregate size classes compared to the middle size class (0.02-0.2 mm). Furthermore, the proportion of P fractions was higher in smaller aggregate sizes (<2 mm) than in the higher aggregate sizes (>2 mm). In conclusion, soils under paddy cultivation displayed improved soil aggregate structure, altered distribution patterns of P fractions in different aggregate size classes, and to some extent had enhanced labile P pools.
Size-exclusion chromatography of perfluorosulfonated ionomers.
Mourey, T H; Slater, L A; Galipo, R C; Koestner, R J
2011-08-26
A size-exclusion chromatography (SEC) method in N,N-dimethylformamide containing 0.1 M LiNO(3) is shown to be suitable for the determination of molar mass distributions of three classes of perfluorosulfonated ionomers, including Nafion®. Autoclaving sample preparation is optimized to prepare molecular solutions free of aggregates, and a solvent exchange method concentrates the autoclaved samples to enable the use of molar-mass-sensitive detection. Calibration curves obtained from light scattering and viscometry detection suggest minor variation in the specific refractive index increment across the molecular size distributions, which introduces inaccuracies in the calculation of local absolute molar masses and intrinsic viscosities. Conformation plots that combine apparent molar masses from light scattering detection with apparent intrinsic viscosities from viscometry detection partially compensate for the variations in refractive index increment. The conformation plots are consistent with compact polymer conformations, and they provide Mark-Houwink-Sakurada constants that can be used to calculate molar mass distributions without molar-mass-sensitive detection. Unperturbed dimensions and characteristic ratios calculated from viscosity-molar mass relationships indicate unusually free rotation of the perfluoroalkane backbones and may suggest limitations to applying two-parameter excluded volume theories for these ionomers. Copyright © 2011 Elsevier B.V. All rights reserved.
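The Mark-Houwink-Sakurada route mentioned above converts intrinsic viscosity to molar mass via [η] = K·M^a. A minimal sketch follows; K and a here are hypothetical placeholders, since the article's fitted constants are not given in the abstract.

```python
def intrinsic_viscosity(molar_mass, K, a):
    """Mark-Houwink-Sakurada relation: [eta] = K * M**a."""
    return K * molar_mass ** a

def molar_mass_from_iv(eta, K, a):
    """Invert MHS to estimate molar mass from intrinsic viscosity,
    i.e. without molar-mass-sensitive detection."""
    return (eta / K) ** (1.0 / a)

# Hypothetical constants (K in dL/g with M in g/mol); once K and a are
# calibrated, viscometry alone yields the molar mass distribution.
K, a = 1e-4, 0.7
eta = intrinsic_viscosity(1.0e5, K, a)
```

The exponent a also encodes chain conformation: values near 0.5 indicate theta-solvent coils, while higher values indicate more expanded chains.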
Spectroscopy of metal "superatom" nanoclusters and high-Tc superconducting pairing
NASA Astrophysics Data System (ADS)
Halder, Avik; Kresin, Vitaly V.
2015-12-01
A unique property of metal nanoclusters is the "superatom" shell structure of their delocalized electrons. The electronic shell levels are highly degenerate and therefore represent sharp peaks in the density of states. This can enable exceptionally strong electron pairing in certain clusters composed of tens to hundreds of atoms. In a finite system, such as a free nanocluster or a nucleus, pairing is observed most clearly via its effect on the energy spectrum of the constituent fermions. Accordingly, we performed a photoionization spectroscopy study of size-resolved aluminum nanoclusters and observed a rapid rise in the near-threshold density of states of several clusters (Al37,44,66,68) with decreasing temperature. The characteristics of this behavior are consistent with compression of the density of states by a pairing transition into a high-temperature superconducting state with Tc ≳ 100 K. This value exceeds that of bulk aluminum by two orders of magnitude. These results highlight the potential of novel pairing effects in size-quantized systems and the possibility to attain even higher critical temperatures by optimizing the particles' size and composition. As a new class of high-temperature superconductors, such metal nanocluster particles are promising building blocks for high-Tc materials, devices, and networks.
Boehm, Julia K.; Chen, Ying; Williams, David R.; Ryff, Carol; Kubzansky, Laura D.
2015-01-01
Socioeconomic status is associated with health disparities, but underlying psychosocial mechanisms have not been fully identified. Dispositional optimism may be a psychosocial process linking socioeconomic status with health. We hypothesized that lower optimism would be associated with greater social disadvantage and poorer social mobility. We also investigated whether life satisfaction and positive affect showed similar patterns. Participants from the Midlife in the United States study self-reported their optimism, satisfaction, positive affect, and socioeconomic status (gender, race/ethnicity, education, occupational class and prestige, income). Social disparities in optimism were evident. Optimistic individuals tended to be white and highly educated, had an educated parent, belonged to higher occupational classes with more prestige, and had higher incomes. Findings were generally similar for satisfaction, but not positive affect. Greater optimism and satisfaction were also associated with educational achievement across generations. Optimism and life satisfaction are consistently linked with socioeconomic advantage and may be one conduit by which social disparities influence health. PMID:25671665
Lu, Yuan; Klimovich, Charlotte M; Robeson, Kalen Z; Boswell, William; Ríos-Cardenas, Oscar; Walter, Ronald B; Morris, Molly R
2017-01-01
Nutritional programming takes place in early development. Variation in the quality and/or quantity of nutrients in early development can influence long-term health and viability. However, little is known about the mechanisms of nutritional programming. The live-bearing fish Xiphophorus multilineatus has the potential to be a new model for understanding these mechanisms, given prior evidence of nutritional programming influencing behavior and juvenile growth rate. We tested the hypotheses that nutritional programming would influence behaviors involved in energy homeostasis as well as gene expression in X. multilineatus. We first examined the influence of both the juvenile environment (varied in nutrition and density) and the adult environment (varied in nutrition) on behaviors involved in energy acquisition and energy expenditure in adult male X. multilineatus. We also compared the behavioral responses across the genetically influenced size classes of males. Males stop growing at sexual maturity, and the size classes can be identified based on phenotypes (adult size and pigment patterns). To study the molecular signatures of nutritional programming, we assembled a de novo transcriptome for X. multilineatus using RNA from brain, liver, skin, testis and gonad tissues, and used RNA-Seq to profile gene expression in the brains of males reared in low-quality (reduced food, increased density) and high-quality (increased food, decreased density) juvenile environments. We found that both the juvenile and adult environments influenced energy intake behavior, while only the adult environment influenced energy expenditure. In addition, there were significant interactions between the genetically influenced size classes and the environments that influenced energy intake and energy expenditure, with males from one of the four size classes (Y-II) responding in the opposite direction compared to the other males examined.
When we compared the brains of males of the Y-II size class reared in a low-quality juvenile environment to males from the same size class reared in a high-quality juvenile environment, 131 genes were differentially expressed, including the metabolism and appetite master regulator agrp. Our study provides evidence for nutritional programming in X. multilineatus, with variation across size classes of males in how juvenile environment and adult diet influence behaviors involved in energy homeostasis. In addition, we provide the first transcriptome of X. multilineatus and identify a group of candidate genes involved in nutritional programming.
Lee, Sungwook; Park, Boyoun; Kang, Kwonyoon
2009-01-01
In contrast to the fairly well-characterized mechanism of assembly of MHC class I-peptide complexes, the disassembly mechanism by which peptide-loaded MHC class I molecules are released from the peptide-loading complex and exit the endoplasmic reticulum (ER) is poorly understood. Optimal peptide binding by MHC class I molecules is assumed to be sufficient for triggering exit of peptide-filled MHC class I molecules from the ER. We now show that protein disulfide isomerase (PDI) controls MHC class I disassembly by regulating dissociation of the tapasin-ERp57 disulfide conjugate. PDI acts as a peptide-dependent molecular switch; in the peptide-bound state, it binds to tapasin and ERp57 and induces dissociation of the tapasin-ERp57 conjugate. In the peptide-free state, PDI is incompetent to bind to tapasin or ERp57 and fails to dissociate the tapasin-ERp57 conjugates, resulting in ER retention of MHC class I molecules. Thus, our results indicate that even after optimal peptide loading, MHC class I disassembly does not occur by default but, rather, is a regulated process involving PDI-mediated interactions within the peptide-loading complex. PMID:19477919
Design of partially supervised classifiers for multispectral image data
NASA Technical Reports Server (NTRS)
Jeon, Byeungwoo; Landgrebe, David
1993-01-01
A partially supervised classification problem is addressed, especially when the class definition and corresponding training samples are provided a priori only for one particular class. In practical applications of pattern classification techniques, a frequently encountered obstacle is the heavy, often nearly impossible, requirement for representative prior statistical characteristics of all classes in a given data set. Considering the effort in both time and man-power required to have a well-defined, exhaustive list of classes with a corresponding representative set of training samples, this 'partially' supervised capability would be very desirable, assuming adequate classifier performance can be obtained. Two different classification algorithms are developed to achieve simplicity in classifier design by reducing the requirement for prior statistical information without sacrificing significant classifying capability. The first is based on optimal significance testing, where the optimal acceptance probability is estimated directly from the data set. In the second approach, the partially supervised classification is treated as a problem of unsupervised clustering with initially one known cluster or class. A weighted unsupervised clustering procedure is developed to automatically define other classes and estimate their class statistics. The operational simplicity thus realized should make these partially supervised classification schemes very viable tools in pattern classification.
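The idea behind the first algorithm (estimate an acceptance threshold for the single known class directly from the data) can be sketched in one dimension. This is a hedged toy version assuming a Gaussian known class; it is not the authors' multispectral implementation.

```python
import math
import random

def gaussian_logpdf(x, mu, sigma):
    """Log-density of a 1-D Gaussian."""
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def fit_one_class(train, alpha=0.05):
    """Fit the known class and estimate the acceptance threshold from the
    training data itself: reject the alpha fraction of training samples
    with the lowest likelihood under the fitted model."""
    mu = sum(train) / len(train)
    sigma = (sum((x - mu) ** 2 for x in train) / (len(train) - 1)) ** 0.5
    scores = sorted(gaussian_logpdf(x, mu, sigma) for x in train)
    cut = scores[int(alpha * len(train))]
    return mu, sigma, cut

def accept(x, mu, sigma, cut):
    """True -> classified as the known class; False -> 'other'/unknown."""
    return gaussian_logpdf(x, mu, sigma) >= cut

rng = random.Random(0)
train = [rng.gauss(0.0, 1.0) for _ in range(500)]   # the one labelled class
mu, sigma, cut = fit_one_class(train)
```

Samples far from the known class fall below the data-derived threshold and are rejected, without any statistics for the unknown classes ever being specified.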
Sella size and jaw bases - Is there a correlation???
Neha; Mogra, Subraya; Shetty, Vorvady Surendra; Shetty, Siddarth
2016-01-01
Sella turcica is an important cephalometric structure, and past studies have attempted to correlate its dimensions with malocclusion. However, no study has so far compared the size of the sella to the jaw bases that determine the type of malocclusion. The present study was undertaken to determine whether such a correlation exists. Lateral cephalograms of 110 adults, comprising 40 Class I, 40 Class II, and 30 Class III patients, were assessed for sella length, width, height, and area. The maxillary length, mandibular ramus height, and mandibular body length were also measured. Sella dimensions were compared among the three malocclusion types by one-way ANOVA, and Pearson correlations were calculated between jaw size and sella dimensions. Furthermore, the ratios of jaw base lengths to sella area were calculated. Mean sella length, width, and area were greatest in Class III, followed by Class I, and least in Class II, though the differences were not statistically significant. Three of the four measured sella dimensions correlated significantly with mandibular ramus height and with body length, whereas only one sella dimension correlated significantly with the maxilla. The mandibular ramus and body lengths showed a nearly constant ratio to sella area (0.83-0.85 and 0.64-0.65, respectively) in all three malocclusions. Thus, the mandible has a definite and better correlation to the size of the sella turcica.
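The two statistical tools used in this study are standard and easy to reproduce. The sketch below runs a one-way ANOVA across three groups and a Pearson correlation, using made-up illustrative numbers (not the study's measurements):

```python
# One-way ANOVA across three malocclusion classes and a Pearson correlation
# between sella area and mandibular body length. All values are invented
# for illustration; they are not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
class_i = rng.normal(11.0, 1.0, 40)    # hypothetical sella lengths, mm
class_ii = rng.normal(10.5, 1.0, 40)
class_iii = rng.normal(11.5, 1.0, 30)

f_stat, p_anova = stats.f_oneway(class_i, class_ii, class_iii)
print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.3f}")

# Hypothetical near-constant ratio (~0.65) between body length and sella area.
sella_area = rng.normal(80, 8, 110)
body_length = 0.65 * sella_area + rng.normal(0, 3, 110)
r, p_corr = stats.pearsonr(sella_area, body_length)
print(f"Pearson r={r:.2f}, p={p_corr:.3g}")
```

A non-significant ANOVA with significant pairwise correlations, as reported above, is entirely consistent: the first tests group mean differences, the second a linear association across individuals.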
California's Class Size Reduction: Implications for Equity, Practice & Implementation.
ERIC Educational Resources Information Center
Wexler, Edward; Izu, JoAnn; Carlos, Lisa; Fuller, Bruce; Hayward, Gerald; Kirst, Mike
When California implemented its class-size reduction (CSR) program in 1996, a number of questions regarding financial burdens, teacher shortages, scarcity of facilities, and collective bargaining were raised. This first-year implementation study aims to provide some contextual information as background for answering questions, to clarify these…
Effects of Class Size on Alternative Educational Outcomes across Disciplines
ERIC Educational Resources Information Center
Cheng, Dorothy A.
2011-01-01
This is the first study to use self-reported ratings of student learning, instructor recommendations, and course recommendations as the outcome measure to estimate class size effects, doing so across 24 disciplines. Fixed-effects models controlling for heterogeneous courses and instructors reveal that increasing enrollment has negative and…
78 FR 42817 - Small Business Size Standards: Waiver of the Nonmanufacturer Rule
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-17
... SMALL BUSINESS ADMINISTRATION Small Business Size Standards: Waiver of the Nonmanufacturer Rule AGENCY: U.S. Small Business Administration. ACTION: Notice of intent to rescind the class waiver of the... Manufacturing. SUMMARY: The U. S. Small Business Administration (SBA) intends to rescind a class waiver of the...
Financing Class Size Reduction
ERIC Educational Resources Information Center
Achilles, C. M.
2005-01-01
Class size reduction has been shown to, among other things, improve academic achievement for all students and particularly for low-income and minority students. With the No Child Left Behind Act's heavy emphasis on scientifically based research, adequate yearly progress, and disaggregated results, one wonders why all children aren't enrolled in…
ERIC Educational Resources Information Center
McCluskey, Neal
"Smaller is better" is often the mantra of school leaders with regard to class size, while the benefits of smaller schools are ignored. Benefits of small classes seem obvious--teachers with fewer students could devote more time to each student. Conducted in 1985-89, Tennessee's Project STAR (Student/Teacher Achievement Ratio) found that…
Influences of Teaching Approaches and Class Size on Undergraduate Mathematical Learning
ERIC Educational Resources Information Center
Olson, Jo Clay; Cooper, Sandy; Lougheed, Tom
2011-01-01
An issue for many mathematics departments is the success rate of precalculus students. In an effort to increase the success rate, this quantitative study investigated how class size and teaching approach influenced student achievement and students' attitudes towards learning mathematics. Students' achievement and their attitudes toward learning…
Educational Production and Teacher Preferences
ERIC Educational Resources Information Center
Bosworth, Ryan; Caliendo, Frank
2007-01-01
We develop a simple model of teacher behavior that offers a solution to the ''class size puzzle'' and is useful for analyzing the potential effects of the No Child Left Behind Act. When teachers must allocate limited classroom time between multiple instructional methods, rational teachers may respond to reductions in class size by reallocating…
Continuous-variable quantum teleportation with non-Gaussian resources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dell'Anno, F. (Dipartimento di Fisica, Università degli Studi di Salerno, Via S. Allende, I-84081 Baronissi; CNR-INFM Coherentia, Napoli; CNISM Unità di Salerno; INFN Sezione di Napoli, Gruppo Collegato di Salerno)
2007-08-15
We investigate continuous variable quantum teleportation using non-Gaussian states of the radiation field as entangled resources. We compare the performance of different classes of degaussified resources, including two-mode photon-added and two-mode photon-subtracted squeezed states. We then introduce a class of two-mode squeezed Bell-like states with one-parameter dependence for optimization. These states interpolate between and include as subcases different classes of degaussified resources. We show that optimized squeezed Bell-like resources yield a remarkable improvement in the fidelity of teleportation both for coherent and nonclassical input states. The investigation reveals that the optimal non-Gaussian resources for continuous variable teleportation are those that most closely realize the simultaneous maximization of the content of entanglement, the degree of affinity with the two-mode squeezed vacuum, and the suitably measured amount of non-Gaussianity.
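For context, the benchmark these optimized resources are compared against is the standard Braunstein-Kimble protocol with a two-mode squeezed vacuum (TMSV) of squeezing parameter $r$, which (with ideal detection) teleports an unknown coherent state with fidelity

```latex
F_{\mathrm{TMSV}} = \frac{1}{1 + e^{-2r}}
```

so $F = 1/2$ at $r = 0$ (the classical boundary) and $F \to 1$ as $r \to \infty$; the squeezed Bell-like resources discussed above improve on this at fixed $r$. (The formula is quoted as standard background, not derived from this abstract.)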
Preferential attachment and growth dynamics in complex systems
NASA Astrophysics Data System (ADS)
Yamasaki, Kazuko; Matia, Kaushik; Buldyrev, Sergey V.; Fu, Dongfeng; Pammolli, Fabio; Riccaboni, Massimo; Stanley, H. Eugene
2006-09-01
Complex systems can be characterized by classes of equivalency of their elements defined according to system specific rules. We propose a generalized preferential attachment model to describe the class size distribution. The model postulates preferential growth of the existing classes and the steady influx of new classes. According to the model, the distribution changes from a pure exponential form for zero influx of new classes to a power law with an exponential cut-off form when the influx of new classes is substantial. Predictions of the model are tested through the analysis of a unique industrial database, which covers both elementary units (products) and classes (markets, firms) in a given industry (pharmaceuticals), covering the entire size distribution. The model’s predictions are in good agreement with the data. The paper sheds light on the emergence of the exponent τ≈2 observed as a universal feature of many biological, social and economic problems.
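The model's growth rule can be captured in a toy simulation: at each step a new elementary unit either founds a new class (with some influx probability b) or joins an existing class with probability proportional to its current size. The parameter values below are illustrative, not fitted to the pharmaceutical data.

```python
# Toy preferential-attachment simulation of the class-size model:
# with probability b a new class of size 1 enters; otherwise an
# existing class grows, chosen with probability proportional to size.
import random

def simulate(steps, b, seed=0):
    rng = random.Random(seed)
    sizes = [1]          # start with one class containing one unit
    units = 1            # invariant: units == sum(sizes)
    for _ in range(steps):
        if rng.random() < b:
            sizes.append(1)            # steady influx of new classes
        else:
            # preferential growth: pick a class with prob ~ its size
            r = rng.randrange(units)
            acc = 0
            for i, s in enumerate(sizes):
                acc += s
                if r < acc:
                    sizes[i] += 1
                    break
        units += 1
    return sizes

sizes = simulate(20000, b=0.05)
# With substantial influx (b > 0) the size distribution has a power-law
# body with an exponential cut-off; as b -> 0 it tends to a pure exponential.
print(len(sizes), max(sizes))
```

Histogramming `sizes` on log-log axes for different values of b reproduces the qualitative transition the abstract describes.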
Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F
2014-07-10
In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials in which the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget, or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at the individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels, but may lose much efficiency when the variance ratio is misspecified. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio: they are robust under misspecification of the ICC for costs for realistic variance ratios greater than one, but not under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention.
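A feel for how such optima arise comes from the simpler textbook case of a cluster randomized trial with a single outcome (a simplification of the paper's cost-effectiveness setting): with cluster-level cost c, person-level cost s, and intraclass correlation rho, the efficiency-optimal number of persons per cluster is n* = sqrt((c/s)(1-rho)/rho). The cost figures below are assumed for illustration.

```python
# Standard optimal cluster size for a simple cluster-randomized design:
#   n* = sqrt((c / s) * (1 - rho) / rho)
# where c = cost per cluster, s = cost per person, rho = intraclass
# correlation. This is the classic single-outcome result, not the
# paper's full cost-effectiveness optimization.
import math

def optimal_cluster_size(c, s, rho):
    return math.sqrt((c / s) * (1 - rho) / rho)

# Illustrative (assumed) values: recruiting a cluster costs 20x as much
# as measuring one person, and the ICC is 0.05.
n_star = optimal_cluster_size(c=200.0, s=10.0, rho=0.05)
print(round(n_star, 1))  # 19.5
```

Note how the optimum depends only on the cost ratio and the ICC, which is exactly why misspecifying such nuisance parameters motivates the maximin approach studied in the paper.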
[Change in short-term memory in pupils of 5-7th classes in the process of class work].
Rybakov, V P; Orlova, N I
2014-01-01
This study investigated short-term memory (STM) of the visual (SVM) and auditory (SAM) modalities in boys and girls of middle school age, both across the school day and over the course of the school week. The data show that from the 5th to the 7th class, SVM and SAM recall volume increases significantly in children of both genders, while SVM productivity in boys of the 6th-7th classes is higher than in girls of the same age. The amplitude of daily changes in SVM and SAM decreases significantly with age. In all age groups the range of daily fluctuations in short-term memory of both modalities is higher in boys than in girls. In all age groups a significant proportion of schoolchildren showed optimal forms of temporal organization of short-term memory (morning, day, and morning-day types), and over the school week the number of optimal daily-dynamics profiles increases in pupils of the 5th-7th classes of both genders, which contributes to the optimization of their mental performance.
Tan, Qunyou; Jiang, Rong; Xu, Meiling; Liu, Guodong; Li, Songlin; Zhang, Jingqing
2013-01-01
Background: Pyridostigmine bromide (3-[[(dimethylamino)-carbonyl]oxy]-1-methylpyridinium bromide), a reversible inhibitor of cholinesterase, is given orally in tablet form, and a treatment schedule of multiple daily doses is recommended for adult patients. Nanotechnology was used in this study to develop an alternative sustained-release delivery system for pyridostigmine, a synthetic drug with high solubility and poor oral bioavailability, hence a Class III drug according to the Biopharmaceutics Classification System. Novel nanosized pyridostigmine-poly(lactic acid) microcapsules (PPNMCs) were expected to have a longer duration of action than free pyridostigmine and previously reported sustained-release formulations of pyridostigmine. Methods: The PPNMCs were prepared using a double emulsion-solvent evaporation method to achieve sustained-release characteristics for pyridostigmine. The preparation process for the PPNMCs was optimized by single-factor experiments. The size distribution, zeta potential, and sustained-release behavior were evaluated in different types of release medium. Results: The optimal volume ratio of inner phase to external phase, poly(lactic acid) concentration, polyvinyl alcohol concentration, and amount of pyridostigmine were 1:10, 6%, 3%, and 40 mg, respectively. The negatively charged PPNMCs had an average particle size of 937.9 nm. Compared with free pyridostigmine, PPNMCs showed an initial burst release and a subsequent very slow release in vitro. The release profiles for the PPNMCs in four different types of dissolution medium were fitted to the Ritger-Peppas and Weibull models. The similarity between pairs of dissolution profiles for the PPNMCs in different types of medium was statistically significant, and the difference between the release curves for PPNMCs and free pyridostigmine was also statistically significant.
Conclusion: PPNMCs prepared by the optimized protocol described here were in the nanometer range and had good uniformity, with significantly slower pyridostigmine release than from free pyridostigmine. This novel sustained-release delivery nanosystem for pyridostigmine might alleviate the need to identify new acetylcholinesterase inhibitors. PMID:23459707
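Fitting a release profile to the Ritger-Peppas model Mt/Minf = k·t^n is a simple log-log linear regression, as sketched below. The time points and release fractions are invented for illustration; they are not the PPNMC data.

```python
# Fit a cumulative-release profile to the Ritger-Peppas model
#   Mt/Minf = k * t**n
# by linear regression on log-transformed data (valid for the early
# portion of release, roughly the first 60%). Hypothetical values.
import numpy as np

t = np.array([0.5, 1, 2, 4, 8])                   # hours (illustrative)
frac = np.array([0.11, 0.15, 0.20, 0.26, 0.35])   # cumulative fraction released

# log(frac) = log(k) + n * log(t): a straight-line fit gives n and k.
slope, intercept = np.polyfit(np.log(t), np.log(frac), 1)
n, k = slope, np.exp(intercept)
print(f"n = {n:.2f}, k = {k:.3f}")
# For spherical carriers, n <= 0.43 indicates Fickian diffusion;
# larger n suggests anomalous (coupled diffusion/relaxation) transport.
```

The exponent n is what makes the model diagnostically useful: it classifies the release mechanism, not just the rate.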
Development of Microemulsion Based Nabumetone Transdermal Delivery For Treatment of Arthritis.
Jagdale, Swati; Deore, Gokul; Chabukswar, Anuruddha
2018-02-26
Background: Nabumetone is a Biopharmaceutics Classification System (BCS) Class II drug widely used in the treatment of osteoarthritis and rheumatoid arthritis. The most frequently reported adverse reactions for the drug involve disturbance of the gastrointestinal tract, diarrhea, dyspepsia, and abdominal pain. A microemulgel offers the advantages of a microemulsion for improving the solubility of a hydrophobic drug. The patent literature shows that work on the drug has covered spray chilling, enteric-coated tablets, and topical formulations, which motivated the present development of a transdermal delivery system. Objective: To optimize a transdermal microemulgel delivery system for nabumetone for the treatment of arthritis. Method: The oil, surfactant, and co-surfactant were selected based on a solubility study for the drug. The gelling agents used were Carbopol 934 and HPMC K100M. Optimization was carried out using a 3² factorial design. Characterization and evaluation were carried out for the microemulsion and the microemulsion-based gel. Results: Field emission scanning electron microscopy (FE-SEM) of the microemulsion revealed globules of 50-200 nm. A zeta potential of -9.50 mV indicated good stability of the microemulsion. The globule size measured by dynamic light scattering (zetasizer) was 160 nm. Design Expert gave the optimized batch as F7, containing 0.2% w/w drug, 4.3% w/w liquid paraffin, 0.71% w/w Tween 80, 0.35% w/w propylene glycol, 0.124% w/w Carbopol 934, 0.187% w/w HPMC K100M, and 11.68% w/w water. An in-vitro diffusion study of batch F7 showed 99.16±2.10% drug release through egg membrane and 99.15±2.73% drug release in the ex-vivo study. Conclusion: A nabumetone microemulgel exhibiting good in-vitro and ex-vivo controlled drug release was optimized.
Kim, Eun Sook; Wang, Yan
2017-01-01
Population heterogeneity in growth trajectories can be detected with growth mixture modeling (GMM). It is common that researchers compute composite scores of repeated measures and use them as multiple indicators of growth factors (baseline performance and growth) assuming measurement invariance between latent classes. Considering that the assumption of measurement invariance does not always hold, we investigate the impact of measurement noninvariance on class enumeration and parameter recovery in GMM through a Monte Carlo simulation study (Study 1). In Study 2, we examine the class enumeration and parameter recovery of the second-order growth mixture modeling (SOGMM) that incorporates measurement models at the first order level. Thus, SOGMM estimates growth trajectory parameters with reliable sources of variance, that is, common factor variance of repeated measures and allows heterogeneity in measurement parameters between latent classes. The class enumeration rates are examined with information criteria such as AIC, BIC, sample-size adjusted BIC, and hierarchical BIC under various simulation conditions. The results of Study 1 showed that the parameter estimates of baseline performance and growth factor means were biased to the degree of measurement noninvariance even when the correct number of latent classes was extracted. In Study 2, the class enumeration accuracy of SOGMM depended on information criteria, class separation, and sample size. The estimates of baseline performance and growth factor mean differences between classes were generally unbiased but the size of measurement noninvariance was underestimated. Overall, SOGMM is advantageous in that it yields unbiased estimates of growth trajectory parameters and more accurate class enumeration compared to GMM by incorporating measurement models. PMID:28928691
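The class-enumeration step described above, choosing the number of latent classes by an information criterion, can be illustrated with an ordinary Gaussian mixture on synthetic data (a simplification of GMM/SOGMM for growth trajectories; scikit-learn's `GaussianMixture` is used as a stand-in for specialized growth-mixture software):

```python
# Class enumeration by BIC: fit candidate numbers of latent classes
# and keep the one minimizing BIC. Synthetic two-class data stand in
# for repeated-measure composite scores.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Two well-separated latent classes, three "time points" each.
X = np.vstack([
    rng.normal(0.0, 1.0, size=(150, 3)),
    rng.normal(6.0, 1.0, size=(150, 3)),
])

bics = {}
for k in range(1, 5):
    gm = GaussianMixture(n_components=k, random_state=0).fit(X)
    bics[k] = gm.bic(X)

best_k = min(bics, key=bics.get)
print(best_k)  # 2 for this well-separated example
```

As the simulation studies above emphasize, enumeration accuracy degrades as class separation shrinks or sample size falls, and the choice of criterion (AIC vs. BIC vs. adjusted BIC) then starts to matter.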
Moteghaed, Niloofar Yousefi; Maghooli, Keivan; Garshasbi, Masoud
2018-01-01
Background: Gene expression data are characteristically high dimensional with a small sample size in contrast to the feature size and variability inherent in biological processes that contribute to difficulties in analysis. Selection of highly discriminative features decreases the computational cost and complexity of the classifier and improves its reliability for prediction of a new class of samples. Methods: The present study used hybrid particle swarm optimization and genetic algorithms for gene selection and a fuzzy support vector machine (SVM) as the classifier. Fuzzy logic is used to infer the importance of each sample in the training phase and decrease the outlier sensitivity of the system to increase the ability to generalize the classifier. A decision-tree algorithm was applied to the most frequent genes to develop a set of rules for each type of cancer. This improved the abilities of the algorithm by finding the best parameters for the classifier during the training phase without the need for trial-and-error by the user. The proposed approach was tested on four benchmark gene expression profiles. Results: Good results have been demonstrated for the proposed algorithm. The classification accuracy for leukemia data is 100%, for colon cancer is 96.67% and for breast cancer is 98%. The results show that the best kernel used in training the SVM classifier is the radial basis function. Conclusions: The experimental results show that the proposed algorithm can decrease the dimensionality of the dataset, determine the most informative gene subset, and improve classification accuracy using the optimal parameters of the classifier with no user interface. PMID:29535919
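The "find the best classifier parameters without user trial-and-error" step can be illustrated with a plain cross-validated grid search over the SVM's C and RBF kernel width, a simpler stand-in for the hybrid PSO/GA search used in the study. The data here are synthetic, not gene expression profiles.

```python
# Tune SVM hyperparameters (C and RBF gamma) by cross-validated grid
# search; a simple stand-in for the paper's PSO/GA parameter search.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic high-dimensional-ish data with a few informative features,
# loosely mimicking a feature-selected gene-expression problem.
X, y = make_classification(n_samples=200, n_features=20,
                           n_informative=5, random_state=0)

param_grid = {"C": [0.1, 1, 10], "gamma": ["scale", 0.01, 0.1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

In the paper's pipeline, the swarm/genetic search additionally selects which genes enter the classifier, and fuzzy membership weights down-weight outlier training samples; the grid search above covers only the kernel-parameter part.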
Optimal design of photoreceptor mosaics: why we do not see color at night.
Manning, Jeremy R; Brainard, David H
2009-01-01
While color vision mediated by rod photoreceptors in dim light is possible (Kelber & Roth, 2006), most animals, including humans, do not see in color at night. This is because their retinas contain only a single class of rod photoreceptors. Many of these same animals have daylight color vision, mediated by multiple classes of cone photoreceptors. We develop a general formulation, based on Bayesian decision theory, to evaluate the efficacy of various retinal photoreceptor mosaics. The formulation evaluates each mosaic under the assumption that its output is processed to optimally estimate the image. It also explicitly takes into account the statistics of the environmental image ensemble. Using the general formulation, we consider the trade-off between monochromatic and dichromatic retinal designs as a function of overall illuminant intensity. We are able to demonstrate a set of assumptions under which the prevalent biological pattern represents optimal processing. These assumptions include an image ensemble characterized by high correlations between image intensities at nearby locations, as well as high correlations between intensities in different wavelength bands. They also include a constraint on receptor photopigment biophysics and/or the information carried by different wavelengths that produces an asymmetry in the signal-to-noise ratio of the output of different receptor classes. Our results thus provide an optimality explanation for the evolution of color vision for daylight conditions and monochromatic vision for nighttime conditions. An additional result from our calculations is that regular spatial interleaving of two receptor classes in a dichromatic retina yields performance superior to that of a retina where receptors of the same class are clumped together.
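A stripped-down version of the evaluation idea can be made concrete with linear-Gaussian assumptions (everything below, the prior, noise level, and mosaics, is invented for illustration; the paper's formulation is more general): score a receptor sampling matrix A by the expected error of the optimal Bayesian estimate of the image, E = tr(S − S Aᵀ(A S Aᵀ + N)⁻¹A S), where S is the image prior covariance and N the receptor noise covariance.

```python
# Toy mosaic evaluation: expected MSE of the optimal linear (Bayesian)
# image estimate from receptor outputs y = A x + noise, for a Gaussian
# image prior with high spatial and spectral correlations.
import numpy as np

def expected_mse(A, S, noise_var):
    N = noise_var * np.eye(A.shape[0])
    G = S @ A.T @ np.linalg.inv(A @ S @ A.T + N)   # optimal linear estimator
    return np.trace(S - G @ A @ S)                 # posterior error

# Image prior: 4 locations x 2 wavelength bands, stacked band-major
# as an 8-vector, with correlation 0.9 between neighbors and bands.
n_loc, rho_s, rho_w = 4, 0.9, 0.9
spatial = rho_s ** np.abs(np.subtract.outer(np.arange(n_loc), np.arange(n_loc)))
spectral = np.array([[1.0, rho_w], [rho_w, 1.0]])
S = np.kron(spectral, spatial)

# Monochromatic mosaic: all 4 receptors sample band 0.
mono = np.eye(8)[:4]
# Interleaved dichromatic mosaic: bands alternate across locations.
di = np.eye(8)[[0, 5, 2, 7]]

print(expected_mse(mono, S, 0.1), expected_mse(di, S, 0.1))
```

Sweeping the noise variance (standing in for illuminant intensity) and adding a per-class signal-to-noise asymmetry is what, in the paper's fuller model, tips the optimum from the dichromatic to the monochromatic design.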
NASA Technical Reports Server (NTRS)
VanderWijngaart, Rob; Biegel, Bryan A. (Technical Monitor)
2002-01-01
We describe a new problem size, called Class D, for the NAS Parallel Benchmarks (NPB), whose MPI source code implementation is being released as NPB 2.4. A brief rationale is given for how the new class is derived. We also describe the modifications made to the MPI (Message Passing Interface) implementation to allow the new class to be run on systems with 32-bit integers, and with moderate amounts of memory. Finally, we give the verification values for the new problem size.
Optimal Decision Making in a Class of Uncertain Systems Based on Uncertain Variables
NASA Astrophysics Data System (ADS)
Bubnicki, Z.
2006-06-01
The paper is concerned with a class of uncertain systems described by relational knowledge representations with unknown parameters which are assumed to be values of uncertain variables characterized by a user in the form of certainty distributions. The first part presents the basic optimization problem consisting in finding the decision maximizing the certainty index that the requirement given by a user is satisfied. The main part is devoted to the description of the optimization problem with the given certainty threshold. It is shown how the approach presented in the paper may be applied to some problems for anticipatory systems.
NASA Technical Reports Server (NTRS)
Markopoulos, N.; Calise, A. J.
1993-01-01
The class of all piecewise time-continuous controllers tracking a given hypersurface in the state space of a dynamical system can be split by the present transformation technique into two disjoint classes; while the first of these contains all controllers which track the hypersurface in finite time, the second contains all controllers that track the hypersurface asymptotically. On this basis, a reformulation is presented for optimal control problems involving state-variable inequality constraints. If the state constraint is regarded as 'soft', there may exist controllers which are asymptotic, two-sided, and able to yield the optimal value of the performance index.
Engineering two-wire optical antennas for near field enhancement
NASA Astrophysics Data System (ADS)
Yang, Zhong-Jian; Zhao, Qian; Xiao, Si; He, Jun
2017-07-01
We study the optimization of near field enhancement in the two-wire optical antenna system. By varying the nanowire sizes we obtain the optimized side-length (width and height) for the maximum field enhancement with a given gap size. The optimized side-length applies to a broadband range (λ = 650-1000 nm). The ratio of extinction cross section to field concentration size is found to be closely related to the field enhancement behavior. We also investigate two experimentally feasible cases which are antennas on glass substrate and mirror, and find that the optimized side-length also applies to these systems. It is also found that the optimized side-length shows a tendency of increasing with the gap size. Our results could find applications in field-enhanced spectroscopies.
Morley, N J; Adam, M E; Lewis, J W
2010-09-01
The production of cercariae from their snail host is a fundamental component of transmission success in trematodes. The emergence of Echinoparyphium recurvatum (Trematoda: Echinostomatidae) cercariae from Lymnaea peregra was studied under natural sunlight conditions, using naturally infected snails of different sizes (10-17 mm) within a temperature range of 10-29 degrees C. There was a single photoperiodic circadian cycle of emergence with one peak, which correlated with the maximum diffuse sunlight irradiation. At 21 degrees C the daily number of emerging cercariae increased with increasing host snail size, but variations in cercarial emergence did occur between both individual snails and different days. There was only limited evidence of cyclic emergence patterns over a 3-week period, probably due to extensive snail mortality, particularly those in the larger size classes. Very few cercariae emerged in all snail size classes at the lowest temperature studied (10 degrees C), but at increasingly higher temperatures elevated numbers of cercariae emerged, reaching an optimum between 17 and 25 degrees C. Above this range emergence was reduced. At all temperatures more cercariae emerged from larger snails. Analysis of emergence using the Q10 value, a measure of physiological processes over temperature ranges, showed that between 10 and 21 degrees C (approximately 15 degrees C) Q10 values exceeded 100 for all snail size classes, indicating a substantially greater emergence than would be expected for normal physiological rates. From 14 to 25 degrees C (approximately 20 degrees C) cercarial emergence in most snail size classes showed little change in Q10, although in the smallest size class emergence was still substantially greater than the typical Q10 increase expected over this temperature range. At the highest range of 21-29 degrees C (approximately 25 degrees C), Q10 was much reduced. 
The importance of these results for cercarial emergence under global climate change is discussed.
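The Q10 coefficient used in this analysis is the factor by which a rate changes per 10 °C rise, computed from rates r1 and r2 measured at temperatures t1 < t2. The example values below are illustrative, not the study's measurements:

```python
# Q10 temperature coefficient: Q10 = (r2 / r1) ** (10 / (t2 - t1)).
# Typical enzymatic processes give Q10 around 2-3; the values above 100
# reported in the study indicate emergence far exceeding normal
# physiological temperature scaling.
def q10(r1, r2, t1, t2):
    return (r2 / r1) ** (10.0 / (t2 - t1))

# e.g. emergence rising from 5 to 400 cercariae/day between 10 and 21 C
print(round(q10(5.0, 400.0, 10.0, 21.0), 1))
```

Because Q10 normalizes any temperature interval to a common 10 °C scale, it lets the authors compare emergence responses across the unevenly spaced temperature ranges in the experiment.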
Reducing Class Size in New York City: Promise vs. Practice
ERIC Educational Resources Information Center
Farrie, Danielle; Johnson, Monete; Lecker, Wendy; Luhm, Theresa
2016-01-01
In the landmark school funding litigation, "Campaign for Fiscal Equity v. State" ("CFE"), the highest Court in New York recognized that reasonable class sizes are an essential element of a constitutional "sound basic education." In response to the rulings in the case, in 2007, the Legislature adopted a law mandating…
What Have Researchers Learned from Project STAR?
ERIC Educational Resources Information Center
Schanzenbach, Diane Whitmore
2007-01-01
Project STAR (Student/Teacher Achievement Ratio) was a large-scale randomized trial of reduced class sizes in kindergarten through the third grade. Because of the scope of the experiment, it has been used in many policy discussions. For example, the California statewide class-size-reduction policy was justified, in part, by the successes of…
Class Size Reduction or Rapid Formative Assessment?: A Comparison of Cost-Effectiveness
ERIC Educational Resources Information Center
Yeh, Stuart S.
2009-01-01
The cost-effectiveness of class size reduction (CSR) was compared with the cost-effectiveness of rapid formative assessment, a promising alternative for raising student achievement. Drawing upon existing meta-analyses of the effects of student-teacher ratio, evaluations of CSR in Tennessee, California, and Wisconsin, and RAND cost estimates, CSR…
ERIC Educational Resources Information Center
Munoz, Marco A.
This study evaluated the Class Size Reduction (CSR) program in 34 elementary schools in Kentucky's Jefferson County Public Schools. The CSR program is a federal initiative to help elementary schools improve student learning by hiring additional teachers. Qualitative data were collected using unstructured interviews, site observations, and document…
Reducing Class Size: What Do We Know?
ERIC Educational Resources Information Center
Bascia, Nina
2010-01-01
This report provides an overview of findings from the research on primary class size reduction as a strategy to improve student learning. Its purpose is to provide a comprehensive and balanced picture of a very popular educational reform strategy that has often been seen as a "quick fix" for improving students' opportunities to learn in…
Class Size Reduction: Great Hopes, Great Challenges. Policy Brief.
ERIC Educational Resources Information Center
WestEd, San Francisco, CA.
This policy brief examines the benefits and the challenges that accompany class-size reduction (CSR). It suggests that when designing CSR programs, states should carefully assess specific circumstances in their schools as they adopt or modify CSR efforts to avoid the unintended consequences that some programs have experienced. Some of the…
The Effects of Class Size on Students' Academic Achievement
ERIC Educational Resources Information Center
Wilson, Claire
2011-01-01
The purpose of this quantitative correlational research was to study the relationship between class size and students' academic achievement. Citywide language arts and math test scores for third and fifth grade students in four New York City public schools were examined using a variety of variables including (a) gender, (b) ethnicity, (c) grade…
ERIC Educational Resources Information Center
Smith, Mary Lee; Glass, Gene V.
Using data from previously completed research, the authors of this report attempted to examine the relationship between class size and measures of outcomes such as student attitudes and behavior, classroom processes and learning environment, and teacher satisfaction. The authors report that statistical integration of the existing research…
Smart Class-Size Policies for Lean Times. SREB Policy Brief
ERIC Educational Resources Information Center
Gagne, Jeff
2012-01-01
Most states nationwide have had policies for several decades that limit the number of students assigned to public K-12 classrooms. Southern Regional Education Board (SREB) states, led by Tennessee and Texas, spearheaded this effort in the 1980s, and SREB's own "Legislative Briefings" have marked the growth of class-size policies across…
Forest fragmentation of southern U.S. bottomland hardwoods
Victor A. Rudis
1993-01-01
The magnitude and character of forest fragmentation are evaluated for bottomland hardwoods in the southern United States. Fragment size class is significantly associated with the frequency of bottomland hardwood species, stand size and ownership classes, and land use attributes. Differences in the frequency of indicators of multiple values are apparent. Two diverse...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 21 2010-07-01 2010-07-01 false Size classes and associated liability... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS LIABILITY LIMITS FOR... privity and knowledge of the owner or operator, the following limits of liability are established for...
Two Universality Classes for the Many-Body Localization Transition
NASA Astrophysics Data System (ADS)
Khemani, Vedika; Sheng, D. N.; Huse, David A.
2017-08-01
We provide a systematic comparison of the many-body localization (MBL) transition in spin chains with nonrandom quasiperiodic versus random fields. We find evidence suggesting that these belong to two separate universality classes: the first dominated by "intrinsic" intrasample randomness, and the second dominated by external intersample quenched randomness. We show that the effects of intersample quenched randomness are strongly growing, but not yet dominant, at the system sizes probed by exact-diagonalization studies on random models. Thus, the observed finite-size critical scaling collapses in such studies appear to be in a preasymptotic regime near the nonrandom universality class, but showing signs of the initial crossover towards the external-randomness-dominated universality class. Our results provide an explanation for why exact-diagonalization studies on random models see an apparent scaling near the transition while also obtaining finite-size scaling exponents that strongly violate Harris-Chayes bounds that apply to disorder-driven transitions. We also show that the MBL phase is more stable for the quasiperiodic model as compared to the random one, and the transition in the quasiperiodic model suffers less from certain finite-size effects.
Collignon, Amandine; Hecq, Jean-Henri; Galgani, François; Collard, France; Goffart, Anne
2014-02-15
The annual variation in neustonic plastic particles and zooplankton was studied in the Bay of Calvi (Corsica) between 30 August 2011 and 7 August 2012. Plastic particles were classified into three size classes: small microplastics (0.2-2 mm), large microplastics (2-5 mm) and mesoplastics (5-10 mm). 74% of the 38 samples contained plastic particles of varying composition: e.g. filaments, polystyrene, thin plastic films. An average concentration of 6.2 particles/100 m(2) was observed. The highest abundance values (69 particles/100 m(2)) occurred during periods of low offshore wind conditions. These values are of the same order of magnitude as those reported in previous studies in the North Western Mediterranean. The relationships between the abundances of zooplankton and plastic particles in each size class were then examined. The ratio for the intermediate size class (2-5 mm) reached 2.73, suggesting potential confusion for predators between planktonic prey and plastic particles of this size. Copyright © 2013 Elsevier Ltd. All rights reserved.
Dorazio, R.M.; Royle, J. Andrew
2003-01-01
We develop a parameterization of the beta-binomial mixture that provides sensible inferences about the size of a closed population when probabilities of capture or detection vary among individuals. Three classes of mixture models (beta-binomial, logistic-normal, and latent-class) are fitted to recaptures of snowshoe hares for estimating abundance and to counts of bird species for estimating species richness. In both sets of data, rates of detection appear to vary more among individuals (animals or species) than among sampling occasions or locations. The estimates of population size and species richness are sensitive to model-specific assumptions about the latent distribution of individual rates of detection. We demonstrate using simulation experiments that conventional diagnostics for assessing model adequacy, such as deviance, cannot be relied on for selecting classes of mixture models that produce valid inferences about population size. Prior knowledge about sources of individual heterogeneity in detection rates, if available, should be used to help select among classes of mixture models that are to be used for inference.
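The sensitivity to the latent detection-rate distribution described above can be illustrated with a minimal beta-binomial sketch: when capture probability varies across individuals, zero-capture histories become far more probable than under a homogeneous binomial, which is exactly what drives the model-dependence of population-size estimates. This is a generic illustration, not the authors' parameterization; all parameter values are hypothetical.

```python
# Beta-binomial capture model: probability an individual is caught x times in
# T occasions when its capture probability p varies as Beta(a, b) across
# individuals. Parameter values below are hypothetical.
import math

def beta_binom_pmf(x, T, a, b):
    """Beta-binomial pmf: Binomial(T, p) with p ~ Beta(a, b) integrated out."""
    comb = math.comb(T, x)
    # B(x+a, T-x+b) / B(a, b), computed via lgamma for numerical stability.
    lbeta = lambda u, v: math.lgamma(u) + math.lgamma(v) - math.lgamma(u + v)
    return comb * math.exp(lbeta(x + a, T - x + b) - lbeta(a, b))

# With heterogeneity (a = b = 0.5), zero-capture histories are far more likely
# than under a homogeneous binomial with the same mean p = 0.5, so the number
# of never-captured individuals -- and hence population size -- is much harder
# to pin down.
T = 5
p0_hetero = beta_binom_pmf(0, T, 0.5, 0.5)
p0_homo = 0.5 ** T
print(round(p0_hetero, 3), round(p0_homo, 3))  # ~0.246 vs 0.031
```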
Three tiers of genome evolution in reptiles
Organ, Chris L.; Moreno, Ricardo Godínez; Edwards, Scott V.
2008-01-01
Characterization of reptilian genomes is essential for understanding the overall diversity and evolution of amniote genomes, because reptiles, which include birds, constitute a major fraction of the amniote evolutionary tree. To better understand the evolution and diversity of genomic characteristics in Reptilia, we conducted comparative analyses of online sequence data from Alligator mississippiensis (alligator) and Sphenodon punctatus (tuatara) as well as genome size and karyological data from a wide range of reptilian species. At the whole-genome and chromosomal tiers of organization, we find that reptilian genome size distribution is consistent with a model of continuous gradual evolution while genomic compartmentalization, as manifested in the number of microchromosomes and macrochromosomes, appears to have undergone early rapid change. At the sequence level, the third genomic tier, we find that exon size in Alligator is distributed in a pattern matching that of exons in Gallus (chicken), especially in the 101—200 bp size class. A small spike in the fraction of exons in the 301 bp—1 kb size class is also observed for Alligator, but more so for Sphenodon. For introns, we find that members of Reptilia have a larger fraction of introns within the 101 bp–2 kb size class and a lower fraction of introns within the 5–30 kb size class than do mammals. These findings suggest that the mode of reptilian genome evolution varies across three hierarchical levels of the genome, a pattern consistent with a mosaic model of genomic evolution. PMID:21669810
Dust control effectiveness of drywall sanding tools.
Young-Corbett, Deborah E; Nussbaum, Maury A
2009-07-01
In this laboratory study, four drywall sanding tools were evaluated in terms of dust generation rates in the respirable and thoracic size classes. In a repeated measures study design, 16 participants performed simulated drywall finishing tasks with each of four tools: (1) ventilated sander, (2) pole sander, (3) block sander, and (4) wet sponge. Dependent variables of interest were thoracic and respirable breathing zone dust concentrations. Analysis by Friedman's Test revealed that the ventilated drywall sanding tool produced significantly less dust, of both size classes, than did the other three tools. The pole and wet sanders produced significantly less dust of both size classes than did the block sander. The block sander, the most commonly used tool in drywall finishing operations, produced significantly more dust of both size classes than did the other three tools. When compared with the block sander, the other tools offer substantial dust reduction. The ventilated tool reduced respirable concentrations by 88% and thoracic concentrations by 85%. The pole sander reduced respirable concentrations by 58% and thoracic by 50%. The wet sander produced reductions of 60% and 47% in the respirable and thoracic classes, respectively. Wet sponge sanders and pole sanders are effective at reducing breathing-zone dust concentrations; however, based on its superior dust control effectiveness, the ventilated sander is the recommended tool for drywall finishing operations.
Closed-form solutions for a class of optimal quadratic regulator problems with terminal constraints
NASA Technical Reports Server (NTRS)
Juang, J.-N.; Turner, J. D.; Chun, H. M.
1984-01-01
Closed-form solutions are derived for coupled Riccati-like matrix differential equations describing the solution of a class of optimal finite time quadratic regulator problems with terminal constraints. Analytical solutions are obtained for the feedback gains and the closed-loop response trajectory. A computational procedure is presented which introduces new variables for efficient computation of the terminal control law. Two examples are given to illustrate the validity and usefulness of the theory.
Resource Allocation and Seed Size Selection in Perennial Plants under Pollen Limitation.
Huang, Qiaoqiao; Burd, Martin; Fan, Zhiwei
2017-09-01
Pollen limitation may affect resource allocation patterns in plants, but its role in the selection of seed size is not known. Using an evolutionarily stable strategy model of resource allocation in perennial iteroparous plants, we show that under density-independent population growth, pollen limitation (i.e., a reduction in ovule fertilization rate) should increase the optimal seed size. At any level of pollen limitation (including none), the optimal seed size maximizes the ratio of juvenile survival rate to the resource investment needed to produce one seed (including both ovule production and seed provisioning); that is, the optimum maximizes the fitness effect per unit cost. Seed investment may affect allocation to postbreeding adult survival. In our model, pollen limitation increases individual seed size but decreases overall reproductive allocation, so that pollen limitation should also increase the optimal allocation to postbreeding adult survival. Under density-dependent population growth, the optimal seed size is inversely proportional to ovule fertilization rate. However, pollen limitation does not affect the optimal allocation to postbreeding adult survival and ovule production. These results highlight the importance of allocation trade-offs in the effect pollen limitation has on the ecology and evolution of seed size and postbreeding adult survival in perennial plants.
Bayesian segmentation of atrium wall using globally-optimal graph cuts on 3D meshes.
Veni, Gopalkrishna; Fu, Zhisong; Awate, Suyash P; Whitaker, Ross T
2013-01-01
Efficient segmentation of the left atrium (LA) wall from delayed enhancement MRI is challenging due to inconsistent contrast, combined with noise, and high variation in atrial shape and size. We present a surface-detection method that is capable of extracting the atrial wall by computing an optimal a-posteriori estimate. This estimation is done on a set of nested meshes, constructed from an ensemble of segmented training images, and graph cuts on an associated multi-column, proper-ordered graph. The graph/mesh is a part of a template/model that has an associated set of learned intensity features. When this mesh is overlaid onto a test image, it produces a set of costs which lead to an optimal segmentation. The 3D mesh has an associated weighted, directed multi-column graph with edges that encode smoothness and inter-surface penalties. Unlike previous graph-cut methods that impose hard constraints on the surface properties, the proposed method follows from a Bayesian formulation resulting in soft penalties on spatial variation of the cuts through the mesh. The novelty of this method also lies in the construction of proper-ordered graphs on complex shapes for choosing among distinct classes of base shapes for automatic LA segmentation. We evaluate the proposed segmentation framework on simulated and clinical cardiac MRI.
Esfahani, Mohammad Shahrokh; Dougherty, Edward R
2015-01-01
Phenotype classification via genomic data is hampered by small sample sizes that negatively impact classifier design. Utilizing prior biological knowledge in conjunction with training data can improve both classifier design and error estimation via the construction of the optimal Bayesian classifier. In the genomic setting, gene/protein signaling pathways provide a key source of biological knowledge. Although these pathways are incomplete, non-regulatory, and carry no timing information, they are capable of constraining the set of possible models representing the underlying interactions between molecules. The aim of this paper is to provide a framework and the mathematical tools to transform signaling pathways into prior probabilities governing uncertainty classes of feature-label distributions used in classifier design. Structural motifs extracted from the signaling pathways are mapped to a set of constraints on a prior probability on a Multinomial distribution. Because the Dirichlet distribution is the conjugate prior for the Multinomial distribution, we propose optimization paradigms to estimate the parameters of a Dirichlet distribution in the Bayesian setting. The performance of the proposed methods is tested on two widely studied pathways: the mammalian cell cycle and a p53 pathway model.
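As a hedged illustration of the Dirichlet-Multinomial conjugacy this abstract relies on (not the paper's pathway-to-prior mapping): a Dirichlet prior over the cells of a discrete feature-label distribution updates in closed form under multinomial sampling. The prior parameters and counts below are hypothetical.

```python
# Dirichlet-Multinomial conjugacy: prior Dirichlet(alpha) plus observed
# multinomial counts yields posterior Dirichlet(alpha + counts).

def dirichlet_posterior(alpha, counts):
    """Posterior Dirichlet parameters after observing multinomial counts."""
    return [a + c for a, c in zip(alpha, counts)]

def dirichlet_mean(alpha):
    """Posterior mean of the cell probabilities."""
    s = float(sum(alpha))
    return [a / s for a in alpha]

# Pathway knowledge could, e.g., favor cell 0 a priori (alpha skewed toward
# it); both the prior and the training counts here are made up.
prior_alpha = [4.0, 1.0, 1.0]
counts = [10, 30, 10]
post_alpha = dirichlet_posterior(prior_alpha, counts)
print(post_alpha, [round(m, 3) for m in dirichlet_mean(post_alpha)])
```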
Networks of channels for self-healing composite materials
NASA Astrophysics Data System (ADS)
Bejan, A.; Lorente, S.; Wang, K.-M.
2006-08-01
This is a fundamental study of how to vascularize a self-healing composite material so that healing fluid reaches all the crack sites that may occur randomly through the material. The network of channels is built into the material and is filled with pressurized healing fluid. When a crack forms, the pressure drops at the crack site and fluid flows from the network into the crack. The objective is to discover the network configuration that is capable of delivering fluid to all the cracks the fastest. The crack site dimension and the total volume of the channels are fixed. It is argued that the network must be configured as a grid and not as a tree. Two classes of grids are considered and optimized: (i) grids with one channel diameter and regular polygonal loops (square, triangle, hexagon) and (ii) grids with two channel sizes. The best architecture of type (i) is the grid with triangular loops. The best architecture of type (ii) has a particular (optimal) ratio of diameters that departs from 1 as the crack length scale becomes smaller than the global scale of the vascularized structure from which the crack draws its healing fluid. The optimization of the ratio of channel diameters cuts in half the time of fluid delivery to the crack.
Neural network for nonsmooth pseudoconvex optimization with general convex constraints.
Bian, Wei; Ma, Litao; Qin, Sitian; Xue, Xiaoping
2018-05-01
In this paper, a one-layer recurrent neural network is proposed for solving a class of nonsmooth, pseudoconvex optimization problems with general convex constraints. Based on the smoothing method, we construct a new regularization function, which does not depend on any information of the feasible region. Thanks to the special structure of the regularization function, we prove the global existence, uniqueness and "slow solution" character of the state of the proposed neural network. Moreover, the state solution of the proposed network is proved to be convergent to the feasible region in finite time and to the optimal solution set of the related optimization problem subsequently. In particular, the convergence of the state to an exact optimal solution is also considered in this paper. Numerical examples with simulation results are given to show the efficiency and good characteristics of the proposed network. In addition, some preliminary theoretical analysis and application of the proposed network for a wider class of dynamic portfolio optimization are included. Copyright © 2018 Elsevier Ltd. All rights reserved.
Optimizing the maximum reported cluster size in the spatial scan statistic for ordinal data.
Kim, Sehwi; Jung, Inkyung
2017-01-01
The spatial scan statistic is an important tool for spatial cluster detection. There have been numerous studies on scanning window shapes. However, little research has been done on the maximum scanning window size or maximum reported cluster size. Recently, Han et al. proposed to use the Gini coefficient to optimize the maximum reported cluster size. However, the method has been developed and evaluated only for the Poisson model. We adopt the Gini coefficient to be applicable to the spatial scan statistic for ordinal data to determine the optimal maximum reported cluster size. Through a simulation study and application to a real data example, we evaluate the performance of the proposed approach. With some sophisticated modification, the Gini coefficient can be effectively employed for the ordinal model. The Gini coefficient most often picked the optimal maximum reported cluster sizes that were the same as or smaller than the true cluster sizes with very high accuracy. It seems that we can obtain a more refined collection of clusters by using the Gini coefficient. The Gini coefficient developed specifically for the ordinal model can be useful for optimizing the maximum reported cluster size for ordinal data and helpful for properly and informatively discovering cluster patterns.
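The Gini-coefficient selection idea described above can be illustrated with a minimal sketch: for each candidate maximum reported cluster size, compute the Gini coefficient over the case counts of the reported non-overlapping clusters, and keep the candidate that maximizes it. This is a generic illustration of Han et al.'s criterion, not the authors' ordinal-model implementation; the cluster data below are made up.

```python
# Generic Gini coefficient over the clusters reported at each candidate
# maximum reported cluster size. All cluster counts are hypothetical.

def gini(values):
    """Gini coefficient of non-negative values (0 = perfect equality)."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard formula based on the ordered values.
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2.0 * cum) / (n * total) - (n + 1.0) / n

# Hypothetical case counts of the non-overlapping significant clusters found
# under three candidate maximum reported cluster sizes (as population fractions).
candidates = {
    0.10: [12, 11, 10],  # many small, similar clusters
    0.25: [30, 5, 4],    # one dominant cluster emerges
    0.50: [39],          # everything merged into one large cluster
}

# Keep the maximum reported cluster size whose reported clusters maximize the
# Gini coefficient, following the general idea of Han et al.
best = max(candidates, key=lambda k: gini(candidates[k]))
print(best, round(gini(candidates[best]), 3))
```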
Adesina, Simeon K.; Wight, Scott A.; Akala, Emmanuel O.
2015-01-01
Purpose Nanoparticle size is important in drug delivery. Clearance of nanoparticles by cells of the reticuloendothelial system has been reported to increase with particle size. Further, nanoparticles should be small enough to avoid lung or spleen filtering effects. Endocytosis and accumulation in tumor tissue by the enhanced permeability and retention effect are also processes that are influenced by particle size. We present the results of studies designed to optimize crosslinked biodegradable stealth polymeric nanoparticles fabricated by dispersion polymerization. Methods Nanoparticles were fabricated using different amounts of macromonomer, initiators, crosslinking agent and stabilizer in a dioxane/DMSO/water solvent system. Nanoparticle formation was confirmed by scanning electron microscopy (SEM). Particle size was measured by dynamic light scattering (DLS). D-optimal mixture statistical experimental design was used for the experimental runs, followed by model generation (Scheffe polynomial) and optimization with the aid of computer software. Model verification was done by comparing particle size data of some suggested solutions to the predicted particle sizes. Results and Conclusion Data showed that average particle sizes follow the same trend as predicted by the model. Negative terms in the model corresponding to the crosslinking agent and stabilizer indicate the important factors for minimizing particle size. PMID:24059281
A Latent Class Regression Analysis of Men's Conformity to Masculine Norms and Psychological Distress
ERIC Educational Resources Information Center
Wong, Y. Joel; Owen, Jesse; Shea, Munyi
2012-01-01
How are specific dimensions of masculinity related to psychological distress in specific groups of men? To address this question, the authors used latent class regression to assess the optimal number of latent classes that explained differential relationships between conformity to masculine norms and psychological distress in a racially diverse…
Optimization-Based Model Fitting for Latent Class and Latent Profile Analyses
ERIC Educational Resources Information Center
Huang, Guan-Hua; Wang, Su-Mei; Hsu, Chung-Chu
2011-01-01
Statisticians typically estimate the parameters of latent class and latent profile models using the Expectation-Maximization algorithm. This paper proposes an alternative two-stage approach to model fitting. The first stage uses the modified k-means and hierarchical clustering algorithms to identify the latent classes that best satisfy the…
NASA Astrophysics Data System (ADS)
Au, How Meng
The aircraft design process traditionally starts with a given set of top-level requirements. These requirements can be aircraft performance related such as the fuel consumption, cruise speed, or takeoff field length, etc., or aircraft geometry related such as the cabin height or cabin volume, etc. This thesis proposes a new aircraft design process in which some of the top-level requirements are not explicitly specified. Instead, these previously specified parameters are now determined through the use of the Price-Per-Value-Factor (PPVF) index. This design process is well suited for design projects where general consensus of the top-level requirements does not exist. One example is the design of small commuter airliners. The above mentioned value factor is comprised of productivity, cabin volume, cabin height, cabin pressurization, mission fuel consumption, and field length, each weighted to a different exponent. The relative magnitude and positive/negative signs of these exponents are in agreement with general experience. The value factors of the commuter aircraft are shown to have improved over a period of four decades. In addition, the purchase price is shown to vary linearly with the value factor. The initial aircraft sizing process can be manpower intensive if the calculations are done manually. By incorporating automation into the process, the design cycle can be shortened considerably. The Fortran program functions and subroutines in this dissertation, in addition to the design and optimization methodologies described above, contribute to the reduction of manpower required for the initial sizing process. By combining the new design process mentioned above and the PPVF as the objective function, an optimization study is conducted on the design of a 20-seat regional jet. Handbook methods for aircraft design are written into a Fortran code. A genetic algorithm is used as the optimization scheme. 
The result of the optimization shows that aircraft designed to this PPVF index can be competitive compared to existing turboprop commuter aircraft. The process developed can be applied to other classes of aircraft with the designer modifying the cost function based upon the design goals.
49 CFR 172.446 - CLASS 9 label.
Code of Federal Regulations, 2010 CFR
2010-10-01
... the six white spaces between them. The lower half of the label must be white with the class number “9... 49 Transportation 2 2010-10-01 2010-10-01 false CLASS 9 label. 172.446 Section 172.446... SECURITY PLANS Labeling § 172.446 CLASS 9 label. (a) Except for size and color, the “CLASS 9...
ERIC Educational Resources Information Center
Dixon, Annabelle
1980-01-01
The author, Deputy Head of Chalk Dell Infant School in Hertford, England, reviews research on the effects of class size and analyzes her own experience with a class of 33 and a class of 23 students. (Editor/SJL)
Oliveri, Paolo; López, M Isabel; Casolino, M Chiara; Ruisánchez, Itziar; Callao, M Pilar; Medini, Luca; Lanteri, Silvia
2014-12-03
A new class-modeling method, referred to as partial least squares density modeling (PLS-DM), is presented. The method is based on partial least squares (PLS), using a distance-based sample density measurement as the response variable. Potential function probability density is subsequently calculated on the PLS scores and used, jointly with residual Q statistics, to develop efficient class models. The influence of adjustable model parameters on the resulting performance has been critically studied by means of cross-validation and application of the Pareto optimality criterion. The method has been applied to verify the authenticity of olives in brine from the cultivar Taggiasca, based on near-infrared (NIR) spectra recorded on homogenized solid samples. Two independent test sets were used for model validation. The final optimal model was characterized by high efficiency and a well-balanced trade-off between sensitivity and specificity, compared with the values obtained by well-established class-modeling methods such as soft independent modeling of class analogy (SIMCA) and unequal dispersed classes (UNEQ). Copyright © 2014 Elsevier B.V. All rights reserved.
Fetisova, Z G
2004-01-01
In accordance with our concept of rigorous optimization of photosynthetic machinery by a functional criterion, this series of papers continues the purposeful search in natural photosynthetic units (PSU) for the basic principles of their organization that we predicted theoretically for optimal model light-harvesting systems. This approach allowed us to determine the basic principles for the organization of a PSU of any fixed size. This series of papers deals with the problem of structural optimization of a light-harvesting antenna of variable size controlled in vivo by the light intensity during the growth of organisms, which accentuates the problem of antenna structure optimization because optimization requirements become more stringent as the PSU increases in size. In this work, using mathematical modeling of the functioning of natural PSUs, we have shown that the aggregation of pigments in a model light-harvesting antenna, being one of the universal optimizing factors, also allows control of the antenna efficiency if the extent of pigment aggregation is a variable parameter. In this case, the efficiency of the antenna increases with the size of the elementary antenna aggregate, thus ensuring the high efficiency of the PSU irrespective of its size; i.e., variation in the extent of pigment aggregation controlled by the size of the light-harvesting antenna is biologically expedient.
Kern, Maximilian M.; Guzy, Jacquelyn C.; Lovich, Jeffrey E.; Gibbons, J. Whitfield; Dorcas, Michael E.
2016-01-01
Because resources are finite, female animals face trade-offs between the size and number of offspring they are able to produce during a single reproductive event. Optimal egg size (OES) theory predicts that any increase in resources allocated to reproduction should increase clutch size with minimal effects on egg size. Variations of OES predict that egg size should be optimized, although not necessarily constant across a population, because optimality is contingent on maternal phenotypes, such as body size and morphology, and recent environmental conditions. We examined the relationships among body size variables (pelvic aperture width, caudal gap height, and plastron length), clutch size, and egg width of diamondback terrapins from separate but proximate populations at Kiawah Island and Edisto Island, South Carolina. We found that terrapins do not meet some of the predictions of OES theory. Both populations exhibited greater variation in egg size among clutches than within, suggesting an absence of optimization except as it may relate to phenotype/habitat matching. We found that egg size appeared to be constrained by more than just pelvic aperture width in Kiawah terrapins but not in the Edisto population. Terrapins at Edisto appeared to exhibit osteokinesis in the caudal region of their shells, which may aid in the oviposition of large eggs.
Danisman, Selahattin; van der Wal, Froukje; Dhondt, Stijn; Waites, Richard; de Folter, Stefan; Bimbo, Andrea; van Dijk, Aalt DJ; Muino, Jose M.; Cutri, Lucas; Dornelas, Marcelo C.; Angenent, Gerco C.; Immink, Richard G.H.
2012-01-01
TEOSINTE BRANCHED1/CYCLOIDEA/PROLIFERATING CELL FACTOR1 (TCP) transcription factors control developmental processes in plants. The 24 TCP transcription factors encoded in the Arabidopsis (Arabidopsis thaliana) genome are divided into two classes, class I and class II TCPs, which are proposed to act antagonistically. We performed a detailed phenotypic analysis of the class I tcp20 mutant, showing an increase in leaf pavement cell sizes in 10-d-old seedlings. Subsequently, a glucocorticoid receptor induction assay was performed, aiming to identify potential target genes of the TCP20 protein during leaf development. The LIPOXYGENASE2 (LOX2) and class I TCP9 genes were identified as TCP20 targets, and binding of TCP20 to their regulatory sequences could be confirmed by chromatin immunoprecipitation analyses. LOX2 encodes a jasmonate biosynthesis enzyme and is also targeted by class II TCP proteins that are under the control of the microRNA JAGGED AND WAVY (JAW), although in an antagonistic manner. Mutation of TCP9, the second identified TCP20 target, resulted in increased pavement cell sizes during early leaf developmental stages. Analysis of senescence in the single tcp9 and tcp20 mutants and the tcp9tcp20 double mutant showed an earlier onset of this process, in comparison with wild-type control plants, in the double mutant only. Both the cell size and senescence phenotypes are opposite to the known class II TCP mutant phenotype in JAW plants. Altogether, these results point to an antagonistic function of class I and class II TCP proteins in the control of leaf development via the jasmonate signaling pathway. PMID:22718775
Low-SWAP Lidar Instrument for Arctic Ice Sheet Mass Balance Monitoring Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, George; Barsic, David
To meet the need to obtain statistically significant data in the North Slope of Alaska (NSA) in support of climate models, Voxtel is developing an unmanned-aircraft-system (UAS)-optimized lidar focal plane array (FPA) and lidar instrument design that integrates the most recent developments in optics, electronics, and computing. Bound by the size, weight, and power (SWAP) budget of low-altitude/long-endurance (LALE) small UAS (SUAS) platforms, a design tradeoff study was conducted. The class of SUAS considered typically operates at altitudes between 150 meters and 2,000 meters; accommodates payloads weighing less than 5 kg; encompasses no more than 4,000 cm3 of space; and consumes no more than 50 watts of power. To address the SWAP constraints, a low-power standalone strap-down (gimbal-less) lidar was developed based on single-photon-counting silicon avalanche photodiodes. To reduce SWAP, a lidar FPA design capable of simultaneous imaging and lidar was developed. The 532-nm-optimized FPA modular design was developed for easy integration, as a lidar payload, in any of a variety of SUAS platforms.
Optimizing probability of detection point estimate demonstration
NASA Astrophysics Data System (ADS)
Koshti, Ajay M.
2017-04-01
The paper discusses optimizing probability of detection (POD) demonstration experiments using the point estimate method. The optimization provides an acceptable value for the probability of passing the demonstration (PPD) while achieving an acceptable probability of false (POF) calls and keeping the flaw sizes in the set as small as possible. The point estimate method is used by NASA for qualifying special NDE procedures and uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. Traditionally, the largest flaw size in the set is considered a conservative estimate of the flaw size with minimum 90% probability and 95% confidence, denoted α90/95PE. The paper investigates the relationship between the range of flaw sizes and α90, the 90% probability flaw size, needed to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution, as is the difference between the median or average of the 29 flaws and α90. In general, it is concluded that if the probability of detection increases with flaw size, the average of the 29 flaw sizes is always larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, then the 29-flaw set can be optimized to meet requirements on minimum PPD, maximum allowable POF, flaw size tolerance about the mean flaw size, and flaw size detectability. The paper provides a procedure for optimizing flaw sizes in the point estimate demonstration flaw set.
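As a sketch of the binomial arithmetic behind this point-estimate rule (my own illustration, not code from the paper; the function name and bisection tolerance are arbitrary), the lower 95% confidence bound on POD from a 29-of-29 demonstration comes out near 0.90:

```python
from math import comb

def pod_lower_bound(n, hits, confidence=0.95, tol=1e-8):
    """Lower confidence bound on POD from an n-flaw demonstration
    with `hits` detections (binomial point estimate)."""
    alpha = 1.0 - confidence

    def upper_tail(p):  # P(X >= hits) for X ~ Binomial(n, p)
        return sum(comb(n, k) * p**k * (1 - p)**(n - k)
                   for k in range(hits, n + 1))

    # upper_tail(p) increases with p; bisect for upper_tail(p) == alpha
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if upper_tail(mid) < alpha:
            lo = mid
        else:
            hi = mid
    return lo

# 29 detections out of 29 flaws -> ~0.902, i.e. 90% POD at 95% confidence
print(round(pod_lower_bound(29, 29), 3))
```

For the all-hits case the tail reduces to p^29, so the bound is simply 0.05^(1/29) ≈ 0.902, which is why the 29-flaw set yields an α90/95 flaw size.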
Feinstein, Wei P; Brylinski, Michal
2015-01-01
Computational approaches have emerged as an instrumental methodology in modern research. For example, virtual screening by molecular docking is routinely used in computer-aided drug discovery. One of the critical parameters for ligand docking is the size of the search space used to identify low-energy binding poses of drug candidates. Currently available docking packages often come with a default protocol for calculating the box size; however, many of these procedures have not been systematically evaluated. In this study, we investigate how the docking accuracy of AutoDock Vina is affected by the selection of a search space. We propose a new procedure for calculating the optimal docking box size that maximizes the accuracy of binding pose prediction against a non-redundant and representative dataset of 3,659 protein-ligand complexes selected from the Protein Data Bank. Subsequently, we use the Directory of Useful Decoys, Enhanced to demonstrate that the optimized docking box size also yields an improved ranking in virtual screening. Binding pockets in both datasets are derived from the experimental complex structures and, additionally, predicted by eFindSite. A systematic analysis of ligand binding poses generated by AutoDock Vina shows that the highest accuracy is achieved when the dimensions of the search space are 2.9 times larger than the radius of gyration of a docking compound. Subsequent virtual screening benchmarks demonstrate that this optimized docking box size also improves compound ranking. For instance, using predicted ligand binding sites, the average enrichment factor calculated for the top 1% (10%) of the screening library is 8.20 (3.28) for the optimized protocol, compared to 7.67 (3.19) for the default procedure. Depending on the evaluation metric, the optimal docking box size gives better ranking in virtual screening for about two-thirds of target proteins.
This fully automated procedure can be used to optimize docking protocols in order to improve the ranking accuracy in production virtual screening simulations. Importantly, the optimized search space systematically yields better results than the default method not only for experimental pockets, but also for those predicted from protein structures. A script for calculating the optimal docking box size is freely available at www.brylinski.org/content/docking-box-size. Graphical Abstract: We developed a procedure to optimize the box size in molecular docking calculations. Left panel shows the predicted binding pose of NADP (green sticks) compared to the experimental complex structure of human aldose reductase (blue sticks) using a default protocol. Right panel shows the docking accuracy using an optimized box size.
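The reported relationship (cubic box edge = 2.9 × ligand radius of gyration) can be sketched as follows; this is my own illustration, not the authors' script, and the four-atom "ligand" coordinates are invented:

```python
import numpy as np

def docking_box_edge(coords, scale=2.9):
    """Cubic search-space edge length from the ligand's radius of
    gyration (mass-unweighted), using the reported 2.9x optimum.
    coords: (N, 3) heavy-atom positions in angstroms."""
    coords = np.asarray(coords, dtype=float)
    center = coords.mean(axis=0)
    rg = np.sqrt(np.mean(np.sum((coords - center) ** 2, axis=1)))
    return scale * rg

# hypothetical planar 4-atom ligand, coordinates in angstroms
ligand = np.array([[0.0, 0.0, 0.0],
                   [1.5, 0.0, 0.0],
                   [1.5, 1.5, 0.0],
                   [0.0, 1.5, 0.0]])
print(round(docking_box_edge(ligand), 2))  # -> 3.08
```

A real workflow would read the compound's coordinates from a structure file before docking; the point here is only how the box edge scales with molecular size.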
NASA Technical Reports Server (NTRS)
Johnson, Theodore F.; Waters, W. Allen; Singer, Thomas N.; Haftka, Raphael T.
2004-01-01
A next generation reusable launch vehicle (RLV) will require thermally efficient and light-weight cryogenic propellant tank structures. Since these tanks will be weight-critical, analytical tools must be developed to aid in sizing the thickness of insulation layers and structural geometry for optimal performance. Finite element method (FEM) models of the tank and insulation layers were created to analyze the thermal performance of the cryogenic insulation layer and thermal protection system (TPS) of the tanks. The thermal conditions of ground-hold and re-entry/soak-through for a typical RLV mission were used in the thermal sizing study. A general-purpose nonlinear FEM analysis code, capable of using temperature and pressure dependent material properties, was used as the thermal analysis code. Mechanical loads from ground handling and proof-pressure testing were used to size the structural geometry of an aluminum cryogenic tank wall. Nonlinear deterministic optimization and reliability optimization techniques were the analytical tools used to size the geometry of the isogrid stiffeners and thickness of the skin. The results from the sizing study indicate that a commercial FEM code can be used for thermal analyses to size the insulation thicknesses where the temperature and pressure were varied. The results from the structural sizing study show that using combined deterministic and reliability optimization techniques can obtain alternate and lighter designs than the designs obtained from deterministic optimization methods alone.
McCaffery, Kirsten J; Dixon, Ann; Hayen, Andrew; Jansen, Jesse; Smith, Sian; Simpson, Judy M
2012-01-01
To test optimal graphic risk communication formats for presenting small probabilities using graphics with a denominator of 1000 to adults with lower education and literacy. A randomized experimental study, which took place in adult basic education classes in Sydney, Australia. The participants were 120 adults with lower education and literacy. An experimental computer-based manipulation compared 1) pictographs in 2 forms, shaded "blocks" and unshaded "dots"; and 2) bar charts across different orientations (horizontal/vertical) and numerator size (small <100, medium 100-499, large 500-999). Accuracy (size of error) and ease of processing (reaction time) were assessed on a gist task (estimating the larger chance of survival) and a verbatim task (estimating the size of difference). Preferences for different graph types were also assessed. Accuracy on the gist task was very high across all conditions (>95%) and not tested further. For the verbatim task, optimal graph type depended on the numerator size. For small numerators, pictographs resulted in fewer errors than bar charts (blocks: odds ratio [OR] = 0.047, 95% confidence interval [CI] = 0.023-0.098; dots: OR = 0.049, 95% CI = 0.024-0.099). For medium and large numerators, bar charts were more accurate (e.g., medium dots: OR = 4.29, 95% CI = 2.9-6.35). Pictographs were generally processed faster for small numerators (e.g., blocks: 14.9 seconds v. bars: 16.2 seconds) and bar charts for medium or large numerators (e.g., large blocks: 41.6 seconds v. 26.7 seconds). Vertical formats were processed slightly faster than horizontal graphs with no difference in accuracy. Most participants preferred bar charts (64%); however, there was no relationship with performance. For adults with low education and literacy, pictographs are likely to be the best format to use when displaying small numerators (<100/1000) and bar charts for larger numerators (>100/1000).
Optimal observables for multiparameter seismic tomography
NASA Astrophysics Data System (ADS)
Bernauer, Moritz; Fichtner, Andreas; Igel, Heiner
2014-08-01
We propose a method for the design of seismic observables with maximum sensitivity to a target model parameter class, and minimum sensitivity to all remaining parameter classes. The resulting optimal observables thereby minimize interparameter trade-offs in multiparameter inverse problems. Our method is based on the linear combination of fundamental observables that can be any scalar measurement extracted from seismic waveforms. Optimal weights of the fundamental observables are determined with an efficient global search algorithm. While most optimal design methods assume variable source and/or receiver positions, our method has the flexibility to operate with a fixed source-receiver geometry, making it particularly attractive in studies where the mobility of sources and receivers is limited. In a series of examples we illustrate the construction of optimal observables, and assess the potentials and limitations of the method. The combination of Rayleigh-wave traveltimes in four frequency bands yields an observable with strongly enhanced sensitivity to 3-D density structure. Simultaneously, sensitivity to S velocity is reduced, and sensitivity to P velocity is eliminated. The original three-parameter problem thereby collapses into a simpler two-parameter problem with one dominant parameter. By defining parameter classes to equal earth model properties within specific regions, our approach mimics the Backus-Gilbert method where data are combined to focus sensitivity in a target region. This concept is illustrated using rotational ground motion measurements as fundamental observables. Forcing dominant sensitivity in the near-receiver region produces an observable that is insensitive to the Earth structure at more than a few wavelengths' distance from the receiver. This observable may be used for local tomography with teleseismic data. 
While our test examples use a small number of well-understood fundamental observables, few parameter classes and a radially symmetric earth model, the method itself does not impose such restrictions. It can easily be applied to large numbers of fundamental observables and parameter classes, as well as to 3-D heterogeneous earth models.
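The weight-optimization idea can be sketched with a toy random search (not the paper's global search algorithm; the sensitivity matrix below is invented): each row of K holds one fundamental observable's sensitivities to the parameter classes, and a weight vector is scored by the combined observable's target-class sensitivity relative to the off-target classes.

```python
import numpy as np

def optimal_weights(K, target, n_trials=20000, seed=1):
    """Random-search sketch: find weights w so that the combined
    observable s = w @ K has large |s[target]| and small sensitivity
    to all other parameter classes."""
    rng = np.random.default_rng(seed)
    K = np.asarray(K, dtype=float)
    best_w, best_score = None, -np.inf
    for _ in range(n_trials):
        w = rng.uniform(-1.0, 1.0, K.shape[0])
        s = w @ K
        off_target = np.sqrt(np.sum(np.delete(s, target) ** 2)) + 1e-12
        score = abs(s[target]) / off_target
        if score > best_score:
            best_w, best_score = w, score
    return best_w / np.linalg.norm(best_w)
```

With two observables whose sensitivities to class 1 have opposite signs, the search recovers (approximately) the equal-weight combination that cancels the off-target sensitivity.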
Constituents of Quality of Life and Urban Size
ERIC Educational Resources Information Center
Royuela, Vicente; Surinach, Jordi
2005-01-01
Do cities have an optimal size? In seeking to answer this question, various theories, including Optimal City Size Theory, the supply-oriented dynamic approach and the city network paradigm, have been forwarded that considered a city's population size as a determinant of location costs and benefits. However, the generalised growth in wealth that…
A Note on Cluster Effects in Latent Class Analysis
ERIC Educational Resources Information Center
Kaplan, David; Keller, Bryan
2011-01-01
This article examines the effects of clustering in latent class analysis. A comprehensive simulation study is conducted, which begins by specifying a true multilevel latent class model with varying within- and between-cluster sample sizes, varying latent class proportions, and varying intraclass correlations. These models are then estimated under…
Speech Music Discrimination Using Class-Specific Features
2004-08-01
Speech Music Discrimination Using Class-Specific Features. Thomas Beierholm. ...between speech and music. Feature extraction is class-specific and can therefore be tailored to each class, meaning that segment size, model orders...interest. Some of the applications of audio signal classification are speech/music classification [1], acoustical environmental classification [2][3
Statistical Analyses of Femur Parameters for Designing Anatomical Plates.
Wang, Lin; He, Kunjin; Chen, Zhengming
2016-01-01
Femur parameters are key prerequisites for scientifically designing anatomical plates. Meanwhile, individual differences in femurs present a challenge to designing well-fitting anatomical plates. Therefore, to design anatomical plates more scientifically, femur parameters were analyzed with statistical methods in this study. The specific steps were as follows. First, taking eight anatomical femur parameters as variables, 100 femur samples were classified into three classes with factor analysis and Q-type cluster analysis. Second, based on the mean parameter values of the three classes of femurs, three sizes of average anatomical plates corresponding to the three classes were designed. Finally, based on Bayes discriminant analysis, a new femur could be assigned to the proper class, and the average anatomical plate suitable for that femur selected from the three available sizes. Experimental results showed that the classification of femurs was quite reasonable from an anatomical standpoint. For instance, three sizes of condylar buttress plates were designed, and 20 new femurs were assigned to their proper classes, after which suitable condylar buttress plates were determined and selected.
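The final assignment step can be sketched as follows, under the simplifying assumption (mine, not the study's) of Gaussian classes with a shared spherical covariance, for which the Bayes discriminant reduces to a prior-penalized nearest-class-mean rule; the class means and priors below are hypothetical:

```python
import numpy as np

def assign_class(x, class_means, priors):
    """Score each class by -0.5*||x - mu||^2 + log(prior) and return
    the index of the highest-scoring class (Bayes rule under a shared
    spherical covariance)."""
    x = np.asarray(x, dtype=float)
    scores = [-0.5 * np.sum((x - np.asarray(m, dtype=float)) ** 2) + np.log(p)
              for m, p in zip(class_means, priors)]
    return int(np.argmax(scores))

# three hypothetical femur classes summarized by two parameter means
# (e.g. femur length and shaft diameter, in mm)
means = [[430.0, 27.0], [450.0, 30.0], [470.0, 33.0]]
print(assign_class([452.0, 29.5], means, priors=[0.3, 0.4, 0.3]))  # -> 1
```

A new femur measured at (452, 29.5) lands in the middle class, so the middle-size plate would be selected for it.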
Energy Current Cumulants in One-Dimensional Systems in Equilibrium
NASA Astrophysics Data System (ADS)
Dhar, Abhishek; Saito, Keiji; Roy, Anjan
2018-06-01
A recent theory based on fluctuating hydrodynamics predicts that one-dimensional interacting systems with particle, momentum, and energy conservation exhibit anomalous transport that falls into two main universality classes. The classification is based on behavior of equilibrium dynamical correlations of the conserved quantities. One class is characterized by sound modes with Kardar-Parisi-Zhang scaling, while the second class has diffusive sound modes. The heat mode follows Lévy statistics, with different exponents for the two classes. Here we consider heat current fluctuations in two specific systems, which are expected to be in the above two universality classes, namely, a hard particle gas with Hamiltonian dynamics and a harmonic chain with momentum conserving stochastic dynamics. Numerical simulations show completely different system-size dependence of current cumulants in these two systems. We explain this numerical observation using a phenomenological model of Lévy walkers with inputs from fluctuating hydrodynamics. This consistently explains the system-size dependence of heat current fluctuations. For the latter system, we derive the cumulant-generating function from a more microscopic theory, which also gives the same system-size dependence of cumulants.
Tang, Yunwei; Jing, Linhai; Li, Hui; Liu, Qingjie; Yan, Qi; Li, Xiuxia
2016-01-01
This study explores the ability of WorldView-2 (WV-2) imagery for bamboo mapping in a mountainous region in Sichuan Province, China. A large part of the study area is covered by shadows in the image, so only a few of the derived sample points were useful. In order to identify bamboo based on sparse training data, the sample size was expanded according to the reflectance of multispectral bands selected using principal component analysis (PCA). Then, class separability based on the training data was calculated using a feature space optimization method to select the features for classification. Four regular object-based classification methods were applied based on both sets of training data. The results show that the k-nearest neighbor (k-NN) method produced the greatest accuracy. A geostatistically-weighted k-NN classifier, accounting for the spatial correlation between classes, was then applied to further increase the accuracy. It achieved 82.65% and 93.10% producer's and user's accuracies, respectively, for the bamboo class. The canopy densities were estimated to explain the result. This study demonstrates that the WV-2 image can be used to identify small patches of understory bamboo given limited known samples, and the resulting bamboo distribution facilitates assessment of giant panda habitat. PMID:27879661
Patient satisfaction surveys as a market research tool for general practices.
Khayat, K; Salter, B
1994-05-01
Recent policy developments, embracing the notions of consumer choice, quality of care, and increased general practitioner control over practice budgets, have resulted in a new competitive environment in primary care. General practitioners must now be more aware of how their patients feel about the services they receive, and patient satisfaction surveys can be an effective tool for general practices. A survey was undertaken to investigate the use of a patient satisfaction survey and whether aspects of patient satisfaction varied according to sociodemographic characteristics such as age, sex, social class, housing tenure, and length of time in education. A sample of 2173 adults living in Medway District Health Authority was surveyed by postal questionnaire in September 1991 in order to elicit their views on general practice services. Levels of satisfaction varied with age, with younger people being consistently less satisfied with general practice services than older people. Women, those in social classes 1-3N, home owners, and those who left school aged 17 years or older were more critical of primary care services than men, those in social classes 3M-5, tenants, and those who left school before the age of 17 years. Surveys and analyses of this kind, if conducted for a single practice, can form the basis of a marketing strategy aimed at optimizing list size, list composition, and service quality. Satisfaction surveys can be readily incorporated into medical audit and financial management.
Accuracy assessment of percent canopy cover, cover type, and size class
H. T. Schreuder; S. Bain; R. C. Czaplewski
2003-01-01
Truth for vegetation cover percent and type is obtained from very large-scale photography (VLSP), stand structure as measured by size classes, and vegetation types from a combination of VLSP and ground sampling. We recommend using the Kappa statistic with bootstrap confidence intervals for overall accuracy, and similarly bootstrap confidence intervals for percent...
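The recommended statistic can be sketched as follows: a minimal pure-Python version (function names and the percentile-bootstrap variant are my choices, not necessarily the authors') of Cohen's kappa computed from (reference, classified) label pairs, with a bootstrap confidence interval from resampling the assessment units.

```python
import random

def kappa(pairs):
    """Cohen's kappa from (reference, classified) label pairs."""
    n = len(pairs)
    labels = sorted({lab for pair in pairs for lab in pair})
    po = sum(a == b for a, b in pairs) / n          # observed agreement
    pe = sum((sum(a == lab for a, _ in pairs) / n) *
             (sum(b == lab for _, b in pairs) / n)  # chance agreement
             for lab in labels)
    if pe == 1.0:   # degenerate resample: both sources constant and equal
        return 1.0
    return (po - pe) / (1.0 - pe)

def bootstrap_ci(pairs, stat=kappa, n_boot=2000, alpha=0.05, seed=7):
    """Percentile bootstrap confidence interval for `stat`."""
    rng = random.Random(seed)
    reps = sorted(stat([rng.choice(pairs) for _ in pairs])
                  for _ in range(n_boot))
    return reps[int(n_boot * alpha / 2)], reps[int(n_boot * (1 - alpha / 2)) - 1]
```

In practice the pairs would come from cross-tabulating VLSP "truth" against the mapped cover type or size class for each sampled unit.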
ERIC Educational Resources Information Center
Flessa, Joseph J.
2012-01-01
Previous work on policy implementation has often suggested that schools leave their "thumbprints" on policies received from above. During the implementation of Primary Class Size Reduction (PCS) Initiative in Ontario, Canada, however, school principals spoke with remarkable uniformity about the ways PCS affected their work. This article…
ERIC Educational Resources Information Center
Mascall, Blair; Leung, Joannie
2012-01-01
In a study of Ontario, Canada's province-wide Primary Class Size Reduction (PCS) Initiative, school districts' ability to direct and support schools was related to their experience with planning and monitoring, interest in innovation, and its human and fiscal resource base. Districts with greater "resource capacity" were able to…
ERIC Educational Resources Information Center
Shin, Yongyun; Raudenbush, Stephen W.
2011-01-01
This article addresses three questions: Does reduced class size cause higher academic achievement in reading, mathematics, listening, and word recognition skills? If it does, how large are these effects? Does the magnitude of such effects vary significantly across schools? The authors analyze data from Tennessee's Student/Teacher Achievement Ratio…
Relationship between Class Size and Students' Opportunity to Learn Writing in Middle School
ERIC Educational Resources Information Center
Tienken, Christopher H.; Achilles, Charles M.
2009-01-01
Class-size reduction (CSR) initiatives have demonstrated positive short- and long-term effects in elementary grades. Less is known about CSR influence on achievement in middle grades. Thus, we conducted a non-experimental, longitudinal, explanatory study of CSR influence on writing achievement of 3 independent cohorts of students (n = 123) in…
An Evaluation of the Federal Class-Size Reduction Program in Wake County, North Carolina--1999-2000.
ERIC Educational Resources Information Center
Scudder, David F.
An empirical evaluation of the federal class-size reduction (CSR) program in Wake County, North Carolina, during the 1999-2000 school year is presented. The qualitative process evaluation showed implementation issues involving the mechanics and the meaning of CSR. Often, schools did not understand where CSR occurred because of changing enrollment…
Learning Approaches and Lecture Attendance of Medical Students
ERIC Educational Resources Information Center
Bates, Madeleine; Curtis, Sally; Dismore, Harriet
2018-01-01
There are arguably many factors that affect the way a student learns. A recent report by the Higher Education Policy Institute (HEPI) and the Higher Education Academy (HEA) on student academic experience in the UK states that class size is an important factor in the quality of the student experience and that smaller class sizes provide greater…
A Plan for the Evaluation of California's Class Size Reduction Initiative.
ERIC Educational Resources Information Center
Kirst, Michael; Bomstedt, George; Stecher, Brian
In July 1996, California began its Class Size Reduction (CSR) Initiative. To gauge the effectiveness of this initiative, an analysis of its objectives and an overview of proposed strategies for evaluating CSR are presented here. An outline of the major challenges that stand between CSR and its mission are provided. These include logistical…
Ontario's Primary Class Size Reduction Initiative: Report on Early Implementation
ERIC Educational Resources Information Center
Bascia, Nina
2010-01-01
Reduction in the size of classes from Kindergarten to Grade 3 was a major Liberal Party campaign promise in Ontario's 2003 provincial election. It was intended to demonstrate a new government's commitment to improving public education. By the 2008-09 school year, the provincial government's goals had been achieved: over 90% of all primary classes…
ERIC Educational Resources Information Center
Graue, M. Elizabeth; Oen, Denise
2009-01-01
Emerging from an evaluation of Wisconsin's Student Achievement Guarantee in Education program (SAGE), a multidimensional program popularly known for its class size reduction component, this article examines SAGE's "lighted schoolhouse" initiative aimed to strengthen links between home and school. Drawing on family focus groups held at…
Study of Cost of Distance Education Institutes with Different Size Classes in India.
ERIC Educational Resources Information Center
Datt, Ruddar
A study of the cost of distance education institutes in India with different size classes involved nine institutions. The sample included 47 percent of total enrollment in distance education institutions in India. The study was restricted to recurring costs and examined the shares of different components of costs and the sources of funding. It…
Using Flexible Busing to Meet Average Class Size Targets
ERIC Educational Resources Information Center
Felt, Andrew J.; Koelemay, Ryan; Richter, Alexander
2008-01-01
This article describes a method of flexible redistricting for K-12 public school districts that allows students from the same geographical region to be bused to different schools, with the goal of meeting average class size (ACS) target ranges. Results of a case study on a geographically large school district comparing this method to a traditional…
Sound Levels in East Texas Schools.
ERIC Educational Resources Information Center
Turner, Aaron Lynn
A survey of sound levels was taken in several Texas schools to determine the amount of noise and sound present by size of class, type of activity, location of building, and the presence of air conditioning and large amounts of glass. The data indicate that class size and relative amounts of glass have no significant bearing on the production of…
The Nevada Class Size Reduction Evaluation Study, 1995.
ERIC Educational Resources Information Center
Nevada State Dept. of Education, Carson City.
A primary purpose for reducing the student-teacher ratio in the early grades is to make students more successful in their later years. This document contains two separate, but interrelated reports that examined two aspects of the 1989 Class Size Reduction (CSR) Act in Nevada. The Act called for a reduction in student-teacher ratios for selected…
Class Size and Student Outcomes: Research and Policy Implications
ERIC Educational Resources Information Center
Chingos, Matthew M.
2013-01-01
Schools across the United States are facing budgetary pressures on a scale not seen in generations. Times of fiscal exigency force policymakers and education practitioners to pay more attention to the return on various categories of public investment in education. The sizes of the classes in which students are educated are often a focus of these…
Code of Federal Regulations, 2012 CFR
2012-07-01
Size classes and associated liability limits for fixed onshore oil storage facilities, 1,000 barrels or less capacity. 40 CFR 113.4 (Title 40, Protection of Environment; Environmental Protection Agency; Water Programs; Liability Limits for...)
Code of Federal Regulations, 2014 CFR
2014-07-01
Size classes and associated liability limits for fixed onshore oil storage facilities, 1,000 barrels or less capacity. 40 CFR 113.4 (Title 40, Protection of Environment; Environmental Protection Agency; Water Programs; Liability Limits for...)
A Comparison of QEIA and Non-QEIA Schools: Implications of Class Size Reduction
ERIC Educational Resources Information Center
Platt, Louise Carolyn Sater
2013-01-01
The purpose of this research study is to compare student achievement changes between matched QEIA and non-QEIA schools in an effort to infer effects of the most significant feature of QEIA funding, class size reduction. The study addressed the critical question--are there demonstrated, significant differences in student achievement gains between…
An Examination of Class Size Reduction on Teaching and Learning Processes: A Theoretical Perspective
ERIC Educational Resources Information Center
Harfitt, Gary James; Tsui, Amy B. M.
2015-01-01
The question of how class size impacts on student learning has been debated for some time, not least because it has substantial financial implications for educational policy. The strength of this debate notwithstanding, results from numerous international studies have been inconclusive. The study from which this paper stems sought to conceptualise…
ERIC Educational Resources Information Center
Yigit, Nevzat; Alpaslan, Muhammet Mustafa; Cinemre, Yasin; Balcin, Bilal
2017-01-01
This study aims to examine the middle school students' perceptions of the classroom learning environment in the science course in Turkey in terms of school location and class size. In the study the Assessing of Constructivist Learning Environment (ACLE) questionnaire was utilized to map students' perceptions of the classroom learning environment.…
What the Research Tells Us: Class Size Reduction. Information Capsule. Volume 1001
ERIC Educational Resources Information Center
Romanik, Dale
2010-01-01
This Information Capsule examines the background and history in addition to research findings pertaining to class size reduction (CSR). This Capsule concludes that although educational researchers have not definitively agreed upon the effectiveness of CSR, given its almost universal public appeal, there is little doubt it is here to stay in some…
Optical systems integrated modeling
NASA Technical Reports Server (NTRS)
Shannon, Robert R.; Laskin, Robert A.; Brewer, SI; Burrows, Chris; Epps, Harlan; Illingworth, Garth; Korsch, Dietrich; Levine, B. Martin; Mahajan, Vini; Rimmer, Chuck
1992-01-01
An integrated modeling capability that provides the tools by which entire optical systems and instruments can be simulated and optimized is a key technology development, applicable to all mission classes, especially astrophysics. Many of the future missions require optical systems that are physically much larger than anything flown before and yet must retain the characteristic sub-micron diffraction limited wavefront accuracy of their smaller precursors. It is no longer feasible to follow the path of 'cut and test' development; the sheer scale of these systems precludes many of the older techniques that rely upon ground evaluation of full size engineering units. The ability to accurately model (by computer) and optimize the entire flight system's integrated structural, thermal, and dynamic characteristics is essential. Two distinct integrated modeling capabilities are required. These are an initial design capability and a detailed design and optimization system. The content of an initial design package is shown. It would be a modular, workstation based code which allows preliminary integrated system analysis and trade studies to be carried out quickly by a single engineer or a small design team. A simple concept for a detailed design and optimization system is shown. This is a linkage of interface architecture that allows efficient interchange of information between existing large specialized optical, control, thermal, and structural design codes. The computing environment would be a network of large mainframe machines and its users would be project level design teams. More advanced concepts for detailed design systems would support interaction between modules and automated optimization of the entire system. Technology assessment and development plans for integrated package for initial design, interface development for detailed optimization, validation, and modeling research are presented.
The evolution of island gigantism and body size variation in tortoises and turtles
Jaffe, Alexander L.; Slater, Graham J.; Alfaro, Michael E.
2011-01-01
Extant chelonians (turtles and tortoises) span almost four orders of magnitude of body size, including the startling examples of gigantism seen in the tortoises of the Galapagos and Seychelles islands. However, the evolutionary determinants of size diversity in chelonians are poorly understood. We present a comparative analysis of body size evolution in turtles and tortoises within a phylogenetic framework. Our results reveal a pronounced relationship between habitat and optimal body size in chelonians. We found strong evidence for separate, larger optimal body sizes for sea turtles and island tortoises, the latter showing support for the rule of island gigantism in non-mammalian amniotes. Optimal sizes for freshwater and mainland terrestrial turtles are similar and smaller, although the range of body size variation in these forms is qualitatively greater. The greater number of potential niches in freshwater and terrestrial environments may mean that body size relationships are more complicated in these habitats. PMID:21270022
2013-01-01
Background: Nanosuspensions are an important class of delivery system for vaccine adjuvants and drugs. Previously, we developed a nanosuspension consisting of the synthetic TLR4 ligand glucopyranosyl lipid adjuvant (GLA) and dipalmitoyl phosphatidylcholine (DPPC). This nanosuspension is a clinical vaccine adjuvant known as GLA-AF. We examined the effects of DPPC supplier, buffer composition, and manufacturing process on GLA-AF physicochemical and biological activity characteristics. Results: DPPC from different suppliers had minimal influence on physicochemical and biological effects. In general, buffered compositions resulted in less particle size stability compared to unbuffered GLA-AF. Microfluidization resulted in rapid particle size reduction after only a few passes, and 20,000 or 30,000 psi processing pressures were more effective at reducing particle size and recovering the active component than 10,000 psi. Sonicated and microfluidized batches maintained good particle size and chemical stability over 6 months, without significantly altering the in vitro or in vivo bioactivity of GLA-AF when combined with a recombinant malaria vaccine antigen. Conclusions: Microfluidization, compared to water bath sonication, may be an effective manufacturing process to improve the scalability and reproducibility of GLA-AF as it advances further in the clinical development pathway. Various sources of DPPC are suitable to manufacture GLA-AF, but buffered compositions of GLA-AF do not appear to offer stability advantages over the unbuffered composition. PMID:24359024
Akhtar, Naveed; Mian, Ajmal
2017-10-03
We present a principled approach to learn a discriminative dictionary along with a linear classifier for hyperspectral classification. Our approach places Gaussian Process priors over the dictionary to account for the relative smoothness of the natural spectra, whereas the classifier parameters are sampled from multivariate Gaussians. We employ two Beta-Bernoulli processes to jointly infer the dictionary and the classifier. These processes are coupled under the same sets of Bernoulli distributions. In our approach, these distributions signify the frequency of the dictionary atom usage in representing class-specific training spectra, which also makes the dictionary discriminative. Due to the coupling between the dictionary and the classifier, the popularity of the atoms for representing different classes gets encoded into the classifier. This helps in predicting the class labels of test spectra that are first represented over the dictionary by solving a simultaneous sparse optimization problem. The labels of the spectra are predicted by feeding the resulting representations to the classifier. Our approach exploits the nonparametric Bayesian framework to automatically infer the dictionary size--the key parameter in discriminative dictionary learning. Moreover, it also has the desirable property of adaptively learning the association between the dictionary atoms and the class labels by itself. We use Gibbs sampling to infer the posterior probability distributions over the dictionary and the classifier under the proposed model, for which we derive analytical expressions. To establish the effectiveness of our approach, we test it on benchmark hyperspectral images. The classification performance is compared with the state-of-the-art dictionary learning-based classification methods.
Skala, Katherine A; Springer, Andrew E; Sharma, Shreela V; Hoelscher, Deanna M; Kelder, Steven H
2012-05-01
Physical education (PE) classes provide opportunities for children to be active. This study examined the associations between specific environmental characteristics (teacher characteristics; class size, duration and location; and lesson context) and elementary school-aged children's moderate-to-vigorous activity (MVPA) during PE. Environmental characteristics and student activity levels were measured in 211 third-, fourth-, and fifth-grade PE classes in 74 Texas public schools using SOFIT direct observation. Students engaged in less than half their PE class time in MVPA (38%), while approximately 25% of class time was spent in classroom management. Percent time in MVPA was significantly higher in outdoor classes compared with indoors (41.4% vs. 36.1%, P = .037). Larger (P = .044) and longer (P = .001) classes were negatively associated with percentage of MVPA and positively correlated with time spent in management (P < .001). Findings suggest that children's activity may be influenced by environmental factors such as class size, location, and lesson contexts. These findings hold important policy implications for PE class organization and the need for strategies that maximize children's MVPA. Further research is needed to test the causal association of these factors with student MVPA.
Joint Geophysical Inversion With Multi-Objective Global Optimization Methods
NASA Astrophysics Data System (ADS)
Lelievre, P. G.; Bijani, R.; Farquharson, C. G.
2015-12-01
Pareto multi-objective global optimization (PMOGO) methods generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. We are applying PMOGO methods to three classes of inverse problems. The first class comprises standard mesh-based problems where the physical property values in each cell are treated as continuous variables. The second class of problems is also mesh-based, but cells can only take discrete physical property values corresponding to known or assumed rock units. In the third class we consider a fundamentally different type of inversion in which a model comprises wireframe surfaces representing contacts between rock units; the physical properties of each rock unit remain fixed while the inversion controls the position of the contact surfaces via control nodes. This third class of problem is essentially a geometry inversion, which can be used to recover the unknown geometry of a target body or to investigate the viability of a proposed Earth model. Joint inversion is greatly simplified for the latter two problem classes because no additional mathematical coupling measure is required in the objective function. PMOGO methods can solve numerically complicated problems that could not be solved with standard descent-based local minimization methods. This includes the latter two classes of problems mentioned above. There are significant increases in the computational requirements when PMOGO methods are used, but these can be ameliorated using parallelization and problem dimension reduction strategies.
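As a minimal illustration of the Pareto-optimal filtering at the heart of PMOGO methods, the sketch below keeps the non-dominated members of a toy suite of candidate models, each scored by two objectives (a data misfit and a regularization term). The candidate values and function names are hypothetical, not taken from the abstract.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Toy suite of candidate models scored by (data misfit, regularization term).
candidates = [(1.0, 5.0), (2.0, 2.0), (3.0, 1.0), (2.5, 2.5), (4.0, 4.0)]
front = pareto_front(candidates)
print(front)  # the trade-off suite, with dominated candidates removed
```

The suite returned by `pareto_front` is the set a PMOGO run would present to the interpreter, in place of a single weighted-sum minimizer.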
State transformations and Hamiltonian structures for optimal control in discrete systems
NASA Astrophysics Data System (ADS)
Sieniutycz, S.
2006-04-01
Preserving the usual definition of the Hamiltonian H as the scalar product of rates and generalized momenta, we investigate two basic classes of discrete optimal control processes governed by difference rather than differential equations for the state transformation. The first class, linear in the time interval θ, secures the constancy of optimal H and satisfies a discrete Hamilton-Jacobi equation. The second class, nonlinear in θ, does not assure the constancy of optimal H and satisfies only a relationship that may be regarded as an equation of Hamilton-Jacobi type. The basic question asked is whether and when Hamilton's canonical structures emerge in optimal discrete systems. For a constrained discrete control, general optimization algorithms are derived that constitute powerful theoretical and computational tools when evaluating extremum properties of constrained physical systems. The mathematical basis is Bellman's method of dynamic programming (DP) and its extension in the form of the so-called Carathéodory-Boltyanski (CB) stage optimality criterion, which allows a variation of the terminal state that is otherwise fixed in Bellman's method. For systems with unconstrained intervals of the holdup time θ two powerful optimization algorithms are obtained: an unconventional discrete algorithm with a constant H and its counterpart for models nonlinear in θ. We also present the time-interval-constrained extension of the second algorithm. The results are general; namely, one arrives at discrete canonical equations of Hamilton, maximum principles, and (at the continuous limit of processes with free intervals of time) the classical Hamilton-Jacobi theory, along with basic results of variational calculus. A vast spectrum of applications and an example are briefly discussed, with particular attention paid to models nonlinear in the time interval θ.
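In generic notation not taken from the abstract (state x, adjoint z, control-dependent rates, stage n, free interval θ), the preserved definition of H for a discrete stage reads

```latex
H^{n-1} \;\equiv\; \sum_{i} z_i^{\,n-1}\,\frac{x_i^{\,n} - x_i^{\,n-1}}{\theta^{\,n}},
```

i.e. the scalar product of the discrete rates (finite differences of the state over the interval) and the generalized momenta; for the first class of processes this H remains constant along an optimal discrete trajectory, in analogy with the autonomous continuous case.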
NASA Technical Reports Server (NTRS)
Rash, James
2014-01-01
NASA's space data-communications infrastructure--the Space Network and the Ground Network--provides scheduled (as well as some limited types of unscheduled) data-communications services to user spacecraft. The Space Network operates several orbiting geostationary platforms (the Tracking and Data Relay Satellite System (TDRSS)), each with its own service-delivery antennas onboard. The Ground Network operates service-delivery antennas at ground stations located around the world. Together, these networks enable data transfer between user spacecraft and their mission control centers on Earth. Scheduling data-communications events for spacecraft that use the NASA communications infrastructure--the relay satellites and the ground stations--can be accomplished today with software having an operational heritage dating from the 1980s or earlier. An implementation of the scheduling methods and algorithms disclosed and formally specified herein will produce globally optimized schedules with not only optimized service delivery by the space data-communications infrastructure but also optimized satisfaction of all user requirements and prescribed constraints, including radio frequency interference (RFI) constraints. Evolutionary algorithms, a class of probabilistic strategies for searching large solution spaces, are the essential technology invoked and exploited in this disclosure. Also disclosed are secondary methods and algorithms for optimizing the execution efficiency of the schedule-generation algorithms themselves. The scheduling methods and algorithms as presented are adaptable to accommodate the complexity of scheduling the civilian and/or military data-communications infrastructure within the expected range of future users and space- or ground-based service-delivery assets. Finally, the problem itself, and the methods and algorithms, are generalized and specified formally.
The generalized methods and algorithms are applicable to a very broad class of combinatorial-optimization problems that encompasses, among many others, the problem of generating optimal space-data communications schedules.
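A heavily simplified sketch of the evolutionary-algorithm idea invoked here: a population of candidate schedules evolves, with elitism, toward assignments of communication requests to (antenna, slot) pairs that avoid conflicts. The problem size, genetic operators, and fitness function are illustrative stand-ins, not the disclosed algorithms.

```python
import random

random.seed(0)

# Toy problem: assign 8 communication requests to (antenna, slot) pairs;
# a schedule scores one point per request that does not collide with another.
N_REQ, ANTENNAS, SLOTS = 8, 2, 4

def fitness(sched):
    used = {}
    for a_s in sched:
        used[a_s] = used.get(a_s, 0) + 1
    return sum(1 for a_s in sched if used[a_s] == 1)

def random_schedule():
    return tuple((random.randrange(ANTENNAS), random.randrange(SLOTS))
                 for _ in range(N_REQ))

def mutate(sched):
    s = list(sched)
    s[random.randrange(N_REQ)] = (random.randrange(ANTENNAS),
                                  random.randrange(SLOTS))
    return tuple(s)

def crossover(a, b):
    cut = random.randrange(1, N_REQ)
    return a[:cut] + b[cut:]

pop = [random_schedule() for _ in range(30)]
start = max(map(fitness, pop))          # best fitness in the initial population
for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:5]                     # elitism: best fitness never decreases
    children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(25)]
    pop = elite + children

best = max(pop, key=fitness)
print(fitness(best))
```

A real schedule generator would add RFI and visibility constraints to the fitness function; the elitist structure shown is what makes the search monotone in the best score found.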
Estimation After a Group Sequential Trial.
Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert
2015-10-01
Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al (2012) and Milanzi et al (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even, unbiased linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size as well as marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite sample unbiased, but is less efficient than the sample average and has the larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n1, n2, …, nL}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased.
We also show why simulations can give the false impression of bias in the sample average when considered conditional upon the sample size. The consequence is that no corrections need to be made to estimators following sequential trials. When small-sample bias is of concern, the conditional likelihood estimator provides a relatively straightforward modification to the sample average. Finally, it is shown that classical likelihood-based standard errors and confidence intervals can be applied, obviating the need for technical corrections.
Jeankumar, Variam Ullas; Reshma, Rudraraju Srilakshmi; Vats, Rahul; Janupally, Renuka; Saxena, Shalini; Yogeeswari, Perumal; Sriram, Dharmarajan
2016-10-21
A structure based medium throughput virtual screening campaign of BITS-Pilani in house chemical library to identify novel binders of Mycobacterium tuberculosis gyrase ATPase domain led to the discovery of a quinoline scaffold. Further medicinal chemistry explorations on the right hand core of the early hit, engendered a potent lead demonstrating superior efficacy both in the enzyme and whole cell screening assay. The binding affinity shown at the enzyme level was further corroborated by biophysical characterization techniques. Early pharmacokinetic evaluation of the optimized analogue was encouraging and provides interesting potential for further optimization. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
NASA Astrophysics Data System (ADS)
Luo, Jia; Zhang, Min; Zhou, Xiaoling; Chen, Jianhua; Tian, Yuxin
2018-01-01
Taking 4 main tree species in the Wuling mountain small watershed as research objects, 57 typical sample plots were set up according to stand type, site conditions and community structure. 311 goal diameter-class sample trees were selected according to diameter-class groups of different tree-height grades, and the optimal fitting models of tree height and DBH growth of the main tree species were obtained by stem analysis using the Richards, Logistic, Korf, Mitscherlich, Schumacher and Weibull theoretical growth equations; the correlation coefficients of all optimal fitting models reached above 0.9. Evaluation and testing showed that the optimal fitting models possessed good fitting precision and forecast dependability.
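As an illustration of fitting one of the named growth equations, the sketch below fits the Logistic form H = a / (1 + b·e^(−cD)) to hypothetical height-DBH pairs by a crude grid search and reports the correlation between observed and fitted heights. The data and parameter grids are invented for the example; a real study would use nonlinear least squares (e.g. Levenberg-Marquardt) rather than a grid.

```python
import math

# Hypothetical height-DBH pairs (DBH in cm, height in m).
data = [(5, 4.1), (10, 7.8), (15, 11.9), (20, 15.2), (25, 17.1),
        (30, 18.4), (35, 19.1), (40, 19.5)]

def logistic(D, a, b, c):
    """Logistic growth equation H = a / (1 + b * exp(-c * D))."""
    return a / (1.0 + b * math.exp(-c * D))

def sse(a, b, c):
    return sum((h - logistic(d, a, b, c)) ** 2 for d, h in data)

# Crude grid search over the three parameters.
best = min(((a, b, c) for a in (18, 19, 20, 21)
                      for b in (4, 6, 8, 10)
                      for c in (0.10, 0.15, 0.20, 0.25)),
           key=lambda p: sse(*p))

# Correlation coefficient between observed and fitted heights.
obs = [h for _, h in data]
fit = [logistic(d, *best) for d, _ in data]
mo, mf = sum(obs) / len(obs), sum(fit) / len(fit)
r = (sum((o - mo) * (f - mf) for o, f in zip(obs, fit))
     / math.sqrt(sum((o - mo) ** 2 for o in obs)
                 * sum((f - mf) ** 2 for f in fit)))
print(best, round(r, 3))
```

The same loop, swapping in the other five equation forms, is enough to reproduce the model-selection step the abstract describes (pick the form with the best fit statistics).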
A theoretical comparison of evolutionary algorithms and simulated annealing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, W.E.
1995-08-28
This paper theoretically compares the performance of simulated annealing and evolutionary algorithms. Our main result is that under mild conditions a wide variety of evolutionary algorithms can be shown to have greater performance than simulated annealing after a sufficiently large number of function evaluations. This class of EAs includes variants of evolution strategies and evolutionary programming, the canonical genetic algorithm, as well as a variety of genetic algorithms that have been applied to combinatorial optimization problems. The proof of this result is based on a performance analysis of a very general class of stochastic optimization algorithms, which has implications for the performance of a variety of other optimization algorithms.
Analogue of the Kelley condition for optimal systems with retarded control
NASA Astrophysics Data System (ADS)
Mardanov, Misir J.; Melikov, Telman K.
2017-07-01
In this paper, we consider an optimal control problem with retarded control and study a larger class of singular (in the classical sense) controls. The Kelley and equality type optimality conditions are obtained. To prove our main results, we use the Legendre polynomials as variations of control.
Automating Structural Analysis of Spacecraft Vehicles
NASA Technical Reports Server (NTRS)
Hrinda, Glenn A.
2004-01-01
A major effort within NASA's vehicle analysis discipline has been to automate structural analysis and sizing optimization during conceptual design studies of advanced spacecraft. Traditional spacecraft structural sizing has involved detailed finite element analysis (FEA) requiring large degree-of-freedom (DOF) finite element models (FEM). Creation and analysis of these models can be time consuming and limit model size during conceptual designs. The goal is to find an optimal design that meets the mission requirements but produces the lightest structure. A structural sizing tool called HyperSizer has been successfully used in the conceptual design phase of a reusable launch vehicle and planetary exploration spacecraft. The program couples with FEA to enable system level performance assessments and weight predictions including design optimization of material selections and sizing of spacecraft members. The software's analysis capabilities are based on established aerospace structural methods for strength, stability and stiffness that produce adequately sized members and reliable structural weight estimates. The software also helps to identify potential structural deficiencies early in the conceptual design so changes can be made without wasted time. HyperSizer's automated analysis and sizing optimization increases productivity and brings standardization to a systems study. These benefits will be illustrated in examining two different types of conceptual spacecraft designed using the software. A hypersonic air breathing, single stage to orbit (SSTO), reusable launch vehicle (RLV) will be highlighted as well as an aeroshell for a planetary exploration vehicle used for aerocapture at Mars. By showing the two different types of vehicles, the software's flexibility will be demonstrated with an emphasis on reducing aeroshell structural weight. Member sizes, concepts and material selections will be discussed as well as analysis methods used in optimizing the structure. 
Analysis based on the HyperSizer structural sizing software will be discussed. Design trades required to optimize structural weight will be presented.
A sectional denture as the optimal prosthesis.
Cohen, K
1989-08-01
A case is described where because of various local factors--anterior ridge loss, Class III skeletal relationship, survey lines, appearance, retention and support problems--a sectional prosthesis was found to be the optimal restoration.
NASA Astrophysics Data System (ADS)
Rastogi, Richa; Srivastava, Abhishek; Khonde, Kiran; Sirasala, Kirannmayi M.; Londhe, Ashutosh; Chavhan, Hitesh
2015-07-01
This paper presents an efficient parallel 3D Kirchhoff depth migration algorithm suitable for the current class of multicore architectures. The fundamental Kirchhoff depth migration algorithm exhibits inherent parallelism; however, when it comes to 3D data migration, the resource requirements of the algorithm grow as the data size increases. This challenges its practical implementation even on current-generation high performance computing systems. Therefore a smart parallelization approach is essential to handle 3D data for migration. The most compute-intensive part of the Kirchhoff depth migration algorithm is the calculation of traveltime tables, due to its resource requirements such as memory/storage and I/O. In the current research work, we target this area and develop a competent parallel algorithm for post- and prestack 3D Kirchhoff depth migration, using hybrid MPI+OpenMP programming techniques. We introduce a concept of flexi-depth iterations while depth migrating data in parallel imaging space, using optimized traveltime table computations. This concept provides flexibility to the algorithm by migrating data in a number of depth iterations, which depends upon the available node memory and the size of data to be migrated during runtime. Furthermore, it minimizes the requirements of storage, I/O and inter-node communication, thus making it advantageous over the conventional parallelization approaches. The developed parallel algorithm is demonstrated and analysed on Yuva II, a PARAM series supercomputer. Optimization, performance and scalability experiment results along with the migration outcome show the effectiveness of the parallel algorithm.
Selective Predation of a Stalking Predator on Ungulate Prey
Heurich, Marco; Zeis, Klara; Küchenhoff, Helmut; Müller, Jörg; Belotti, Elisa; Bufka, Luděk; Woelfing, Benno
2016-01-01
Prey selection is a key factor shaping animal populations and evolutionary dynamics. An optimal forager should target prey that offers the highest benefits in terms of energy content at the lowest costs. Predators are therefore expected to select for prey of optimal size. Stalking predators do not pursue their prey long, which may lead to a more random choice of prey individuals. Due to difficulties in assessing the composition of available prey populations, data on prey selection of stalking carnivores are still scarce. We show how the stalking predator Eurasian lynx (Lynx lynx) selects prey individuals based on species identity, age, sex and individual behaviour. To address the difficulties in assessing prey population structure, we confirm inferred selection patterns by using two independent data sets: (1) data of 387 documented kills of radio-collared lynx were compared to the prey population structure retrieved from systematic camera trapping using Manly’s standardized selection ratio alpha and (2) data on 120 radio-collared roe deer were analysed using a Cox proportional hazards model. Among the larger red deer prey, lynx selected against adult males—the largest and potentially most dangerous prey individuals. In roe deer lynx preyed selectively on males and did not select for a specific age class. Activity during high risk periods reduced the risk of falling victim to a lynx attack. Our results suggest that the stalking predator lynx actively selects for size, while prey behaviour induces selection by encounter and stalking success rates. PMID:27548478
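Manly's standardized selection ratio, used in comparison (1) above, can be computed as α_i = (o_i/p_i) / Σ_j (o_j/p_j), where o_i is the count of prey class i among kills and p_i its availability (here, camera-trap counts). The counts below are hypothetical, not the study's data; α_i above 1/m (m = number of prey classes) indicates selection for that class.

```python
# Hypothetical counts: kills per prey class, and availability from camera traps.
kills = {"roe_deer": 300, "red_deer": 60, "hare": 27}
avail = {"roe_deer": 500, "red_deer": 300, "hare": 200}

def manly_alpha(kills, avail):
    """Manly's standardized selection ratio for each prey class."""
    ratios = {k: kills[k] / avail[k] for k in kills}
    total = sum(ratios.values())
    return {k: r / total for k, r in ratios.items()}

alpha = manly_alpha(kills, avail)
threshold = 1 / len(kills)  # alpha above 1/m indicates positive selection
for k, a in alpha.items():
    print(k, round(a, 3), "selected" if a > threshold else "avoided")
```

The ratios are standardized to sum to one, so classes can be compared directly even when availabilities differ by an order of magnitude.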
Capital dissipation minimization for a class of complex irreversible resource exchange processes
NASA Astrophysics Data System (ADS)
Xia, Shaojun; Chen, Lingen
2017-05-01
A model of a class of irreversible resource exchange processes (REPes) between a firm and a producer, with commodity flow leakage from the producer to a competitive market, is established in this paper. The REPes are assumed to obey the linear commodity transfer law (LCTL). Optimal price paths for capital dissipation minimization (CDM), a measure of economic process irreversibility, are obtained using the averaged optimal control theory. The optimal REP strategy is also compared with other strategies, such as constant-firm-price operation and constant-commodity-flow operation, and the effects of the amount of commodity transferred and of the commodity flow leakage on the optimal REP strategy are analyzed. The commodity prices of both the producer and the firm for the CDM of the REPes with commodity flow leakage change exponentially with time.
NASA Astrophysics Data System (ADS)
Wen, Gezheng; Markey, Mia K.
2015-03-01
It is resource-intensive to conduct human studies for task-based assessment of medical image quality and system optimization. Thus, numerical model observers have been developed as a surrogate for human observers. The Hotelling observer (HO) is the optimal linear observer for signal-detection tasks, but the high dimensionality of imaging data results in a heavy computational burden. Channelization is often used to approximate the HO through a dimensionality reduction step, but how to produce channelized images without losing significant image information remains a key challenge. Kernel local Fisher discriminant analysis (KLFDA) uses kernel techniques to perform supervised dimensionality reduction, which finds an embedding transformation that maximizes between-class separability and preserves within-class local structure in the low-dimensional manifold. It is powerful for classification tasks, especially when the distribution of a class is multimodal. Such multimodality could be observed in many practical clinical tasks. For example, primary and metastatic lesions may both appear in medical imaging studies, but the distributions of their typical characteristics (e.g., size) may be very different. In this study, we propose to use KLFDA as a novel channelization method. The dimension of the embedded manifold (i.e., the result of KLFDA) is a counterpart to the number of channels in the state-of-the-art linear channelization. We present a simulation study to demonstrate the potential usefulness of KLFDA for building the channelized HOs (CHOs) and generating reliable decision statistics for clinical tasks. We show that the performance of the CHO with KLFDA channels is comparable to that of the benchmark CHOs.
NASA Astrophysics Data System (ADS)
Dreifuss, Tamar; Betzer, Oshra; Barnoy, Eran; Motiei, Menachem; Popovtzer, Rachela
2018-02-01
Theranostics is an emerging field, defined as combination of therapeutic and diagnostic capabilities in the same material. Nanoparticles are considered as an efficient platform for theranostics, particularly in cancer treatment, as they offer substantial advantages over both common imaging contrast agents and chemotherapeutic drugs. However, the development of theranostic nanoplatforms raises an important question: Is the optimal particle for imaging also optimal for therapy? Are the specific parameters required for maximal drug delivery, similar to those required for imaging applications? Herein, we examined this issue by investigating the effect of nanoparticle size on tumor uptake and imaging. Anti-epidermal growth factor receptor (EGFR)-conjugated gold nanoparticles (GNPs) in different sizes (diameter range: 20-120 nm) were injected to tumor bearing mice and their uptake by tumors was measured, as well as their tumor visualization capabilities as tumor-targeted CT contrast agent. Interestingly, the results showed that different particles led to highest tumor uptake or highest contrast enhancement, meaning that the optimal particle size for drug delivery is not necessarily optimal for tumor imaging. These results have important implications on the design of theranostic nanoplatforms.
Capitalizando en los cursos pequenos (Capitalizing on Small Class Size). ERIC Digest.
ERIC Educational Resources Information Center
O'Connell, Jessica; Smith, Stuart C.
This digest in Spanish examines school districts' efforts to reap the greatest benefit from smaller classes. Although the report discusses teaching strategies that are most effective in small classes, research has shown that teachers do not significantly change their teaching practices when they move from larger to smaller classes. Although…
NASA Astrophysics Data System (ADS)
Liliawati, W.; Purwanto; Zulfikar, A.; Kamal, R. N.
2018-05-01
This study aims to examine the effectiveness of teaching materials based on multiple intelligences for high school students' understanding of the theme of global warming. The research method used is a static-group pretest-posttest design. Participants of the study were 60 high school students of class XI in one of the high schools in Bandung, divided into two classes of 30 students each: an experimental class and a control class. The experimental class used teaching materials based on multiple intelligences, while the control class did not. The instrument used is a test of understanding of the concept of global warming, comprising 15 multiple-choice questions and 5 essay items, given to both classes before and after instruction. Data were analysed using N-gain and effect size. The results show that the N-gain for both classes is in the medium category, and the effectiveness of the teaching materials, based on the effect-size test, is in the high category.
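The two statistics the analysis rests on are easy to state: Hake's normalized gain ⟨g⟩ = (post − pre)/(max − pre), and Cohen's d, the standardized mean difference between groups. The sketch below computes both for invented class means and standard deviations (the study's actual scores are not given in the abstract).

```python
import math

# Hypothetical class means on the 0-100 concept test (pre, post).
exp_pre, exp_post = 42.0, 75.0
ctl_pre, ctl_post = 40.0, 58.0

def n_gain(pre, post, max_score=100.0):
    """Hake's normalized gain <g> = (post - pre) / (max - pre)."""
    return (post - pre) / (max_score - pre)

g_exp = n_gain(exp_pre, exp_post)   # ~0.57: medium category (0.3 <= g < 0.7)
g_ctl = n_gain(ctl_pre, ctl_post)   # 0.30: lower edge of medium

# Cohen's d on post-test scores, with assumed SDs and group size n = 30.
sd_exp, sd_ctl, n = 10.0, 11.0, 30
pooled_sd = math.sqrt(((n - 1) * sd_exp**2 + (n - 1) * sd_ctl**2) / (2 * n - 2))
d = (exp_post - ctl_post) / pooled_sd   # d > 0.8 is conventionally "high"
print(round(g_exp, 2), round(g_ctl, 2), round(d, 2))
```

With these assumed numbers both classes land in the medium N-gain band while the effect size falls in the high category, mirroring the pattern the abstract reports.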
2012-01-01
Background Gum ghatti is a proteinaceous edible, exudate tree gum of India and is also used in traditional medicine. A facile and ecofriendly green method has been developed for the synthesis of silver nanoparticles from silver nitrate using gum ghatti (Anogeissus latifolia) as a reducing and stabilizing agent. The influence of concentration of gum and reaction time on the synthesis of nanoparticles was studied. UV–visible spectroscopy, transmission electron microscopy and X-ray diffraction analytical techniques were used to characterize the synthesized nanoparticles. Results By optimizing the reaction conditions, we could achieve nearly monodispersed and size controlled spherical nanoparticles of around 5.7 ± 0.2 nm. A possible mechanism involved in the reduction and stabilization of nanoparticles has been investigated using Fourier transform infrared spectroscopy and Raman spectroscopy. Conclusions The synthesized silver nanoparticles had significant antibacterial action on both the Gram classes of bacteria. As the silver nanoparticles are encapsulated with functional group rich gum, they can be easily integrated for various biological applications. PMID:22571686
Sparse feature learning for instrument identification: Effects of sampling and pooling methods.
Han, Yoonchang; Lee, Subin; Nam, Juhan; Lee, Kyogu
2016-05-01
Feature learning for music applications has recently received considerable attention from many researchers. This paper reports on a sparse feature learning algorithm for musical instrument identification and, in particular, focuses on the effects of the frame sampling techniques for dictionary learning and the pooling methods for feature aggregation. To this end, two frame sampling techniques are examined, namely fixed and proportional random sampling. Furthermore, the effect of using onset frames was analyzed for both proposed sampling methods. Regarding summarization of the feature activation, a standard deviation pooling method is used and compared with the commonly used max- and average-pooling techniques. Using more than 47 000 recordings of 24 instruments from various performers, playing styles, and dynamics, a number of tuning parameters are examined, including the analysis frame size, the dictionary size, and the type of frequency scaling, as well as the different sampling and pooling methods. The results show that the combination of proportional sampling and standard deviation pooling achieves the best overall performance of 95.62%, while the optimal parameter set varies among the instrument classes.
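The three pooling methods compared can be sketched in a few lines: each reduces a per-frame activation sequence for one dictionary atom to a single clip-level value. The activation values below are hypothetical.

```python
import math

def max_pool(acts):
    return max(acts)

def avg_pool(acts):
    return sum(acts) / len(acts)

def std_pool(acts):
    """Standard deviation pooling: spread of the activation over frames."""
    m = avg_pool(acts)
    return math.sqrt(sum((a - m) ** 2 for a in acts) / len(acts))

# One dictionary atom's activation across the frames of a clip (hypothetical).
activation = [0.0, 0.1, 0.9, 0.2, 0.0, 0.8, 0.1, 0.3]
print(max_pool(activation), avg_pool(activation), round(std_pool(activation), 3))
```

Max pooling keeps only the strongest response, average pooling the typical level; standard deviation pooling instead captures how bursty the atom's activation is over time, which is the temporal cue the paper finds most discriminative.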
Quantum chemical modeling of enzymatic reactions: the case of histone lysine methyltransferase.
Georgieva, Polina; Himo, Fahmi
2010-06-01
Quantum chemical cluster models of enzyme active sites are today an important and powerful tool in the study of various aspects of enzymatic reactivity. This methodology has been applied to a wide spectrum of reactions and many important mechanistic problems have been solved. Herein, we report a systematic study of the reaction mechanism of the histone lysine methyltransferase (HKMT) SET7/9 enzyme, which catalyzes the methylation of the N-terminal histone tail of the chromatin structure. In this study, HKMT SET7/9 serves as a representative case to examine the modeling approach for the important class of methyl transfer enzymes. Active site models of different sizes are used to evaluate the methodology. In particular, the dependence of the calculated energies on the model size, the influence of the dielectric medium, and the particular choice of the dielectric constant are discussed. In addition, we examine the validity of some technical aspects, such as geometry optimization in solvent or with a large basis set, and the use of different density functional methods. Copyright 2010 Wiley Periodicals, Inc.
Khondoker, Mizanur R; Bachmann, Till T; Mewissen, Muriel; Dickinson, Paul; Dobrzelecki, Bartosz; Campbell, Colin J; Mount, Andrew R; Walton, Anthony J; Crain, Jason; Schulze, Holger; Giraud, Gerard; Ross, Alan J; Ciani, Ilenia; Ember, Stuart W J; Tlili, Chaker; Terry, Jonathan G; Grant, Eilidh; McDonnell, Nicola; Ghazal, Peter
2010-12-01
Machine learning and statistical model based classifiers have increasingly been used with more complex and high dimensional biological data obtained from high-throughput technologies. Understanding the impact of various factors associated with large and complex microarray datasets on the predictive performance of classifiers is computationally intensive, under-investigated, yet vital in determining the optimal number of biomarkers for various classification purposes aimed towards improved detection, diagnosis, and therapeutic monitoring of diseases. We investigate the impact of microarray based data characteristics on the predictive performance for various classification rules using simulation studies. Our investigation using Random Forest, Support Vector Machines, Linear Discriminant Analysis and k-Nearest Neighbour shows that the predictive performance of classifiers is strongly influenced by training set size, biological and technical variability, replication, fold change and correlation between biomarkers. The optimal number of biomarkers for a classification problem should therefore be estimated taking account of the impact of all these factors. A database of average generalization errors is built for various combinations of these factors. The database of generalization errors can be used for estimating the optimal number of biomarkers for given levels of predictive accuracy as a function of these factors. Examples show that curves from actual biological data resemble those of simulated data with corresponding levels of data characteristics. An R package optBiomarker implementing the method is freely available for academic use from the Comprehensive R Archive Network (http://www.cran.r-project.org/web/packages/optBiomarker/).
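A toy version of the kind of simulation described, using a pure-Python nearest-neighbour classifier as a stand-in (not the optBiomarker package, which is in R): generalization error is estimated on synthetic two-class "expression profiles" as a function of training-set size and class separation, the latter a crude proxy for fold change.

```python
import random

random.seed(1)

def simulate(n_train, separation, n_test=50, n_genes=5):
    """1-NN test error on synthetic two-class expression profiles whose class
    means differ by `separation` in every gene (a crude fold-change proxy)."""
    def sample(label, n):
        mu = separation if label else 0.0
        return [([random.gauss(mu, 1.0) for _ in range(n_genes)], label)
                for _ in range(n)]
    train = sample(0, n_train // 2) + sample(1, n_train // 2)
    test = sample(0, n_test // 2) + sample(1, n_test // 2)

    def predict(x):
        # label of the nearest training profile (squared Euclidean distance)
        return min(train, key=lambda t: sum((a - b) ** 2
                                            for a, b in zip(t[0], x)))[1]

    return sum(predict(x) != y for x, y in test) / n_test

# Error generally shrinks as training size grows and as separation increases.
for n in (4, 20, 100):
    print(n, simulate(n, separation=1.0))
print("well separated:", simulate(100, separation=3.0))
```

Averaging such runs over many replicates, and over grids of variability, replication, and correlation settings, is what builds the database of generalization errors the abstract refers to.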
Energy minimization on manifolds for docking flexible molecules
Mirzaei, Hanieh; Zarbafian, Shahrooz; Villar, Elizabeth; Mottarella, Scott; Beglov, Dmitri; Vajda, Sandor; Paschalidis, Ioannis Ch.; Vakili, Pirooz; Kozakov, Dima
2015-01-01
In this paper we extend a recently introduced rigid body minimization algorithm, defined on manifolds, to the problem of minimizing the energy of interacting flexible molecules. The goal is to integrate moving the ligand in six-dimensional rotational/translational space with internal rotations around rotatable bonds within the two molecules. We show that adding rotational degrees of freedom to the rigid moves of the ligand results in an overall optimization search space that is a manifold to which our manifold optimization approach can be extended. The effectiveness of the method is shown for three different docking problems of increasing complexity. First we minimize the energy of fragment-size ligands with a single rotatable bond as part of a protein mapping method developed for the identification of binding hot spots. Second, we consider energy minimization for docking a flexible ligand to a rigid protein receptor, an approach frequently used in existing methods. In the third problem we account for flexibility in both the ligand and the receptor. Results show that minimization using the manifold optimization algorithm is substantially more efficient than minimization using a traditional all-atom optimization algorithm while producing solutions of comparable quality. In addition to the specific problems considered, the method is general enough to be used in a large class of applications such as docking multidomain proteins with flexible hinges. The code is available under open source license (at http://cluspro.bu.edu/Code/Code_Rigtree.tar), and with minimal effort can be incorporated into any molecular modeling package. PMID:26478722
Optimal deployment of thermal energy storage under diverse economic and climate conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeForest, Nicholas; Mendes, Gonçalo; Stadler, Michael
2014-04-01
This paper presents an investigation of the economic benefit of thermal energy storage (TES) for cooling, across a range of economic and climate conditions. Chilled water TES systems are simulated for a large office building in four distinct locations: Miami in the U.S.; Lisbon, Portugal; Shanghai, China; and Mumbai, India. Optimal system size and operating schedules are determined using the optimization model DER-CAM, such that total cost, including electricity and amortized capital costs, is minimized. The economic impacts of each optimized TES system are then compared to systems sized using a simple heuristic method, which sets system size as a fraction (50% and 100%) of total on-peak summer cooling loads. Results indicate that TES systems of all sizes can be effective in reducing annual electricity costs (5%-15%) and peak electricity consumption (13%-33%). The investigation also identifies a number of criteria which drive TES investment, including low capital costs, electricity tariffs with high power demand charges, and prolonged cooling seasons. In locations where these drivers clearly exist, the heuristically sized systems capture much of the value of optimally sized systems, between 60% and 100% in terms of net present value. However, in instances where these drivers are less pronounced, the heuristic tends to oversize systems, and optimization becomes crucial to ensure economically beneficial deployment of TES, increasing the net present value relative to heuristically sized systems by as much as 10 times in some instances.
Ritter, Philippe; Delnoy, Peter Paul H M; Padeletti, Luigi; Lunati, Maurizio; Naegele, Herbert; Borri-Brunetto, Alberto; Silvestre, Jorge
2012-09-01
Non-response rate to cardiac resynchronization therapy (CRT) might be decreased by optimizing device programming. The Clinical Evaluation on Advanced Resynchronization (CLEAR) study aimed to assess the effects of CRT with automatically optimized atrioventricular (AV) and interventricular (VV) delays, based on a Peak Endocardial Acceleration (PEA) signal system. This multicentre, single-blind study randomized patients in a 1 : 1 ratio to CRT optimized either automatically by the PEA-based system, or according to centres' usual practices, mostly by echocardiography. Patients had heart failure (HF) New York Heart Association (NYHA) functional class III/IV, left ventricular ejection fraction (LVEF) <35%, QRS duration >150 or >120 ms with mechanical dyssynchrony. Follow-up was 1 year. The primary endpoint was the proportion of patients who improved their condition at 1 year, based on a composite of all-cause death, HF hospitalizations, NYHA class, and quality of life. In all, 268 patients in sinus rhythm (63% men; mean age: 73.1 ± 9.9 years; mean NYHA: 3.0 ± 0.3; mean LVEF: 27.1 ± 8.1%; and mean QRS duration: 160.1 ± 22.0 ms) were included and 238 patients were randomized, 123 to PEA and 115 to the control group. At 1 year, 76% of patients assigned to PEA were classified as improved, vs. 62% in the control group (P = 0.0285). The percentage of patients with improved NYHA class was significantly (P = 0.0020) higher in the PEA group than in controls. Fatal and non-fatal adverse events were evenly distributed between the groups. PEA-based optimization of CRT in HF patients significantly increased the proportion of patients who improved with therapy, mainly through improved NYHA class, after 1 year of follow-up.
Performance Analysis and Design Synthesis (PADS) computer program. Volume 1: Formulation
NASA Technical Reports Server (NTRS)
1972-01-01
The program formulation for PADS computer program is presented. It can size launch vehicles in conjunction with calculus-of-variations optimal trajectories and can also be used as a general-purpose branched trajectory optimization program. In the former use, it has the Space Shuttle Synthesis Program as well as a simplified stage weight module for optimally sizing manned recoverable launch vehicles. For trajectory optimization alone or with sizing, PADS has two trajectory modules. The first trajectory module uses the method of steepest descent; the second employs the method of quasilinearization, which requires a starting solution from the first trajectory module.
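The steepest-descent method used by the first trajectory module can be illustrated with a generic scalar sketch; the objective, step size, and tolerance below are hypothetical, and the actual PADS formulation operates on trajectory control histories rather than a single variable:

```python
def steepest_descent(grad, x0, step=0.1, tol=1e-8, max_iter=10_000):
    """Generic steepest-descent loop: repeatedly step opposite the
    gradient until it is (nearly) zero. Illustrative scalar version."""
    x = x0
    for _ in range(max_iter):
        g = grad(x)
        if abs(g) < tol:
            break
        x -= step * g
    return x

# minimize f(x) = (x - 3)^2, whose gradient is 2*(x - 3)
x_min = steepest_descent(lambda x: 2.0 * (x - 3.0), x0=0.0)
```

A quasilinearization stage, as in the second module, would instead start from a solution like `x_min` and solve a sequence of linearized two-point boundary-value problems to refine it.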
Class Size and Teacher Load in High School English. New York State English Council Monography No. 8.
ERIC Educational Resources Information Center
Wade, Durlyn E.
To determine the class size and teaching load of secondary teachers of English in New York state, the Research Committee of the State English Council mailed 1,093 questionnaires to chairmen of English Departments in the state's registered public and private secondary schools. The 694 usable replies--representing 4,410 full-time English…
ERIC Educational Resources Information Center
Finn, Jeremy D.
2010-01-01
In 2002, voters in Florida approved a constitutional amendment limiting class sizes in public schools to 18 students in the elementary grades, 22 students in middle grades, and 25 in high school grades. Analyzing statewide achievement data for school districts from 2004-2006 and for schools in 2007, this study purports to find that "mandated…
Factors in the Determination of Cost Effective Class Sizes. Report No. 009-79.
ERIC Educational Resources Information Center
Woods, Nancy A.
A system to determine cost effectiveness of class size should be based on both budgeted and actual expenditures and credit hours at the individual course section level. These two factors, in combination, are often expressed as cost per credit hour, and this statistic forms the primary means of evaluating planned "inputs" against actual "outputs."…
ERIC Educational Resources Information Center
Al Kuwaiti, Ahmed; AlQuraan, Mahmoud; Subbarayalu, Arun Vijay
2016-01-01
Objective: This study aims to investigate the interaction between response rate and class size and its effects on students' evaluation of instructors and the courses offered at a higher education Institution in Saudi Arabia. Study Design: A retrospective study design was chosen. Methods: One thousand four hundred and forty four different courses…
So How Big Is Big? Investigating the Impact of Class Size on Ratings in Student Evaluation
ERIC Educational Resources Information Center
Gannaway, Deanne; Green, Teegan; Mertova, Patricie
2018-01-01
Australian universities have a long history of use of student satisfaction surveys. Their use has expanded and purpose changed over time. The surveys are often viewed as distorted by external influences such as discipline context, class size and year level of participants. This paper reports on the results of a large-scale investigation…
ERIC Educational Resources Information Center
Dibiase, David; Rademacher, Henry J.
2005-01-01
This article explores issues of scalability and sustainability in distance learning. The authors kept detailed records of time they spent teaching a course in geographic information science via the World Wide Web over a six-month period, during which class sizes averaged 49 students. The authors also surveyed students' satisfaction with the…
ERIC Educational Resources Information Center
Mitchell, Douglas E.; Mitchell, Ross E.
This report presents a comprehensive preliminary analysis of how California's Class Size Reduction (CSR) initiative has impacted student achievement during the first 2 years of implementation. The analysis is based on complete student, classroom, and teacher records from 26,126 students in 1,174 classrooms from 83 schools in 8 Southern California…
Effects of tree-to-tree variations on sap flux-based transpiration estimates in a forested watershed
NASA Astrophysics Data System (ADS)
Kume, Tomonori; Tsuruta, Kenji; Komatsu, Hikaru; Kumagai, Tomo'omi; Higashi, Naoko; Shinohara, Yoshinori; Otsuki, Kyoichi
2010-05-01
To estimate forest stand-scale water use, we assessed how sample sizes affect confidence of stand-scale transpiration (E) estimates calculated from sap flux (Fd) and sapwood area (AS_tree) measurements of individual trees. In a Japanese cypress plantation, we measured Fd and AS_tree in all trees (n = 58) within a 20 × 20 m study plot, which was divided into four 10 × 10 m subplots. We calculated E from stand AS_tree (AS_stand) and mean stand Fd (JS) values. Using Monte Carlo analyses, we examined potential errors associated with sample sizes in E, AS_stand, and JS by using the original AS_tree and Fd data sets. From these analyses, we defined optimal sample sizes of 10 and 15 for AS_stand and JS estimates, respectively, in the 20 × 20 m plot. Sample sizes greater than the optimal sample sizes did not decrease potential errors. The optimal sample sizes for JS changed according to plot size (e.g., 10 × 10 m and 10 × 20 m), while the optimal sample sizes for AS_stand did not. Likewise, the optimal sample sizes for JS did not change under different vapor pressure deficit conditions. In terms of E estimates, these results suggest that the tree-to-tree variations in Fd vary among different plots, and that the plot size used to capture tree-to-tree variations in Fd is an important factor. This study also discusses planning balanced sampling designs to extrapolate stand-scale estimates to catchment-scale estimates.
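The Monte Carlo sample-size assessment described above can be sketched as repeated subsampling from a full census: the smallest n whose subsample mean stays within a tolerance of the census mean is taken as the optimal sample size. The synthetic 58-tree stand, the 10% tolerance, and the error criterion below are illustrative assumptions, not the study's data or thresholds:

```python
import random
import statistics

def optimal_sample_size(values, tol=0.10, n_mc=2000, seed=1):
    """For each candidate n, repeatedly subsample n trees from the full
    census and compute the mean relative error of the subsample mean;
    return the smallest n meeting the tolerance (illustrative criterion)."""
    rng = random.Random(seed)
    true_mean = statistics.mean(values)
    for n in range(2, len(values) + 1):
        errs = []
        for _ in range(n_mc):
            sample = rng.sample(values, n)
            errs.append(abs(statistics.mean(sample) - true_mean) / true_mean)
        if statistics.mean(errs) <= tol:
            return n
    return len(values)

# hypothetical stand of 58 trees with tree-to-tree variation in sap flux
rng = random.Random(42)
fd = [max(0.1, rng.gauss(1.0, 0.3)) for _ in range(58)]
n_opt = optimal_sample_size(fd)
```

Running the same procedure on census subsets of different spatial extents mirrors the study's comparison of optimal sample sizes across plot sizes.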
Estimating accuracy of land-cover composition from two-stage cluster sampling
Stehman, S.V.; Wickham, J.D.; Fattorini, L.; Wade, T.D.; Baffetta, F.; Smith, J.H.
2009-01-01
Land-cover maps are often used to compute land-cover composition (i.e., the proportion or percent of area covered by each class), for each unit in a spatial partition of the region mapped. We derive design-based estimators of mean deviation (MD), mean absolute deviation (MAD), root mean square error (RMSE), and correlation (CORR) to quantify accuracy of land-cover composition for a general two-stage cluster sampling design, and for the special case of simple random sampling without replacement (SRSWOR) at each stage. The bias of the estimators for the two-stage SRSWOR design is evaluated via a simulation study. The estimators of RMSE and CORR have small bias except when sample size is small and the land-cover class is rare. The estimator of MAD is biased for both rare and common land-cover classes except when sample size is large. A general recommendation is that rare land-cover classes require large sample sizes to ensure that the accuracy estimators have small bias. © 2009 Elsevier Inc.
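The accuracy measures defined above (MD, MAD, RMSE) can be sketched for the simple-random-sampling case. The example proportions are hypothetical, and the paper's full two-stage design-based estimators additionally involve stage-wise inclusion probabilities not shown here:

```python
import math

def composition_accuracy(mapped, reference):
    """MD, MAD and RMSE of mapped vs. reference land-cover proportions
    across sampled units (equal-weight SRS sketch of the estimators)."""
    devs = [m - r for m, r in zip(mapped, reference)]
    n = len(devs)
    md = sum(devs) / n                                  # mean deviation
    mad = sum(abs(d) for d in devs) / n                 # mean absolute deviation
    rmse = math.sqrt(sum(d * d for d in devs) / n)      # root mean square error
    return md, mad, rmse

# hypothetical composition of one class in three sampled units
md, mad, rmse = composition_accuracy([0.30, 0.25, 0.10], [0.28, 0.30, 0.12])
```

Because RMSE squares the deviations, it is never smaller than MAD, which is one reason the two measures can behave differently for rare classes.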
Computational techniques for flows with finite-rate condensation
NASA Technical Reports Server (NTRS)
Candler, Graham V.
1993-01-01
A computational method to simulate the inviscid two-dimensional flow of a two-phase fluid was developed. This computational technique treats the gas phase and each of a prescribed number of particle sizes as separate fluids which are allowed to interact with one another. Thus, each particle-size class is allowed to move through the fluid at its own velocity at each point in the flow field. Mass, momentum, and energy are exchanged between each particle class and the gas phase. It is assumed that the particles do not collide with one another, so that there is no inter-particle exchange of momentum and energy. However, the particles are allowed to grow, and therefore, they may change from one size class to another. Appropriate rates of mass, momentum, and energy exchange between the gas and particle phases and between the different particle classes were developed. A numerical method was developed for use with this equation set. Several test cases were computed and show qualitative agreement with previous calculations.
Determination of the optimal sample size for a clinical trial accounting for the population size.
Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin
2017-07-01
The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
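The square-root scaling can be illustrated with a deliberately simplified utility, not the paper's exponential-family formulation: if each trial patient costs c and each of the N future patients suffers expected loss k/n from estimation error, the total loss c*n + N*k/n is minimized near n* = sqrt(N*k/c), so the optimal trial size grows as the square root of the population size. The cost and loss constants below are arbitrary:

```python
def optimal_trial_size(N, c=1.0, k=50.0):
    """Toy decision-theoretic sizing (illustrative): brute-force the n
    minimizing total loss L(n) = c*n + N*k/n, whose closed-form minimum
    is n* = sqrt(N*k/c)."""
    return min(range(1, N + 1), key=lambda n: c * n + N * k / n)

n_small = optimal_trial_size(1_000)
n_big = optimal_trial_size(100_000)
# a 100-fold larger population yields only a ~10-fold (sqrt(100))
# larger optimal trial, matching the O(N^(1/2)) behaviour
ratio = n_big / n_small
```

The paper's result is the rigorous analogue of this observation for utilities defined through one-parameter exponential-family endpoints.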
Integration of Rotor Aerodynamic Optimization with the Conceptual Design of a Large Civil Tiltrotor
NASA Technical Reports Server (NTRS)
Acree, C. W., Jr.
2010-01-01
Coupling of aeromechanics analysis with vehicle sizing is demonstrated with the CAMRAD II aeromechanics code and NDARC sizing code. The example is optimization of cruise tip speed with rotor/wing interference for the Large Civil Tiltrotor (LCTR2) concept design. Free-wake models were used for both rotors and the wing. This report is part of a NASA effort to develop an integrated analytical capability combining rotorcraft aeromechanics, structures, propulsion, mission analysis, and vehicle sizing. The present paper extends previous efforts by including rotor/wing interference explicitly in the rotor performance optimization and implicitly in the sizing.
Hybrid Optimization in Urban Traffic Networks
DOT National Transportation Integrated Search
1979-04-01
The hybrid optimization problem is formulated to provide a general theoretical framework for the analysis of a class of traffic control problems which takes into account the role of individual drivers as independent decisionmakers. Different behavior...
Comparison of Extruded and Sonicated Vesicles for Planar Bilayer Self-Assembly
Cho, Nam-Joon; Hwang, Lisa Y.; Solandt, Johan J.R.; Frank, Curtis W.
2013-01-01
Lipid vesicles are an important class of biomaterials that have a wide range of applications, including drug delivery, cosmetic formulations and model membrane platforms on solid supports. Depending on the application, properties of a vesicle population such as size distribution, charge and permeability need to be optimized. Preparation methods such as mechanical extrusion and sonication play a key role in controlling these properties, and yet the effects of the vesicle preparation method on vesicular properties and integrity (e.g., shape, size, distribution and tension) remain incompletely understood. In this study, we prepared vesicles composed of 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC) lipid by either extrusion or sonication, and investigated the effects on vesicle size distribution over time as well as the concomitant effects on the self-assembly of solid-supported planar lipid bilayers. Dynamic light scattering (DLS), quartz crystal microbalance with dissipation (QCM-D) monitoring, fluorescence recovery after photobleaching (FRAP) and atomic force microscopy (AFM) experiments were performed to characterize vesicles in solution as well as their interactions with silicon oxide substrates. Collectively, the data indicate that sonicated vesicles offer more robust control over the self-assembly of homogeneous planar lipid bilayers, whereas extruded vesicles are vulnerable to aging and must be used soon after preparation. PMID:28811437