Using Records of Achievement in Higher Education.
ERIC Educational Resources Information Center
Assiter, Alison, Ed.; Shaw, Eileen, Ed.
This collection of 22 essays examines the use of records of achievement (student profiles or portfolios) in higher and vocational education in the United Kingdom. The essays include: (1) "Records of Achievement: Background, Definitions, and Uses" (Alison Assiter and Eileen Shaw); (2) "Profiling in Higher Education" (Alison Assiter and Angela Fenwick);…
Higher-order force gradient symplectic algorithms
NASA Astrophysics Data System (ADS)
Chin, Siu A.; Kidwell, Donald W.
2000-12-01
We show that a recently discovered fourth order symplectic algorithm, which requires one evaluation of the force gradient in addition to three evaluations of the force, when iterated to higher order, yields algorithms that are far superior to similarly iterated higher order algorithms based on the standard Forest-Ruth algorithm. We gauge the accuracy of each algorithm by comparing the step-size independent error functions associated with energy conservation and the rotation of the Laplace-Runge-Lenz vector when solving a highly eccentric Kepler problem. For orders 6, 8, 10, and 12, the new algorithms are approximately a factor of 10³, 10⁴, 10⁴, and 10⁵ better.
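The baseline scheme mentioned above, the fourth-order Forest-Ruth algorithm, is compact enough to sketch. The following Python illustration integrates a Kepler orbit and checks that energy is conserved; it shows the standard Forest-Ruth composition, not the paper's force-gradient variant, and the step size and (circular) initial orbit are arbitrary choices for the demonstration.

```python
import numpy as np

# Forest-Ruth 4th-order symplectic coefficients
THETA = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
C = [THETA / 2, (1 - THETA) / 2, (1 - THETA) / 2, THETA / 2]  # drifts
D = [THETA, 1 - 2 * THETA, THETA]                             # kicks

def accel(q):
    """Kepler acceleration a = -q/|q|^3 (units with GM = 1)."""
    r = np.linalg.norm(q)
    return -q / r ** 3

def forest_ruth_step(q, v, h):
    """One drift-kick-drift-...-drift Forest-Ruth step of size h."""
    for i in range(3):
        q = q + C[i] * h * v
        v = v + D[i] * h * accel(q)
    q = q + C[3] * h * v
    return q, v

def energy(q, v):
    return 0.5 * v @ v - 1.0 / np.linalg.norm(q)

q = np.array([1.0, 0.0])   # circular-orbit initial condition
v = np.array([0.0, 1.0])
E0 = energy(q, v)
for _ in range(1000):
    q, v = forest_ruth_step(q, v, 0.01)
print(abs(energy(q, v) - E0))   # symplectic: energy error stays bounded
```

The step-size-independent error functions used in the paper would be extracted by repeating such runs at several step sizes and scaling out the h⁴ factor.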
Higher Education Is Key To Achieving MDGs
ERIC Educational Resources Information Center
Association of Universities and Colleges of Canada, 2004
2004-01-01
Imagine trying to achieve the Millennium Development Goals (MDGs) without higher education. As key institutions of civil society, universities are uniquely positioned between the communities they serve and the governments they advise. Through the CIDA-funded University Partnerships in Cooperation and Development program, Canadian universities have…
Higher Education Counts: Achieving Results. 2006 Report
ERIC Educational Resources Information Center
Connecticut Department of Higher Education (NJ1), 2006
2006-01-01
"Higher Education Counts" is the annual accountability report on Connecticut's state system of higher education, as required under Connecticut General Statutes Section 10a-6a. The report contains accountability measures developed through the Performance Measures Task Force and approved by the Board of Governors for Higher Education. The measures…
Higher Education Counts: Achieving Results. 2008 Report
ERIC Educational Resources Information Center
Connecticut Department of Higher Education (NJ1), 2008
2008-01-01
"Higher Education Counts" is the annual accountability report on Connecticut's state system of higher education, as required under Connecticut General Statutes Section 10a-6a. The report contains accountability measures developed through the Performance Measures Task Force and approved by the Board of Governors for Higher Education. The measures…
Higher Education Counts: Achieving Results. 2007 Report
ERIC Educational Resources Information Center
Connecticut Department of Higher Education (NJ1), 2007
2007-01-01
"Higher Education Counts" is the annual accountability report on Connecticut's state system of higher education, as required under Connecticut General Statutes Section 10a-6a. The report contains accountability measures developed through the Performance Measures Task Force and approved by the Board of Governors for Higher Education. The measures…
Higher Education Counts: Achieving Results. 2009 Report
ERIC Educational Resources Information Center
Connecticut Department of Higher Education (NJ1), 2009
2009-01-01
"Higher Education Counts" is the annual accountability report on Connecticut's state system of higher education, as required under Connecticut General Statutes Section 10a-6a. The report contains accountability measures developed through the Performance Measures Task Force and approved by the Board of Governors for Higher Education. The measures…
Achieving Quality Learning in Higher Education.
ERIC Educational Resources Information Center
Nightingale, Peggy; O'Neil, Mike
This volume on quality learning in higher education discusses issues of good practice particularly action learning and Total Quality Management (TQM)-type strategies and illustrates them with seven case studies in Australia and the United Kingdom. Chapter 1 discusses issues and problems in defining quality in higher education. Chapter 2 looks at…
Achievable Polarization for Heat-Bath Algorithmic Cooling.
Rodríguez-Briones, Nayeli Azucena; Laflamme, Raymond
2016-04-29
Pure quantum states play a central role in applications of quantum information, both as initial states for quantum algorithms and as resources for quantum error correction. Preparation of highly pure states that satisfy the threshold for quantum error correction remains a challenge, not only for ensemble implementations like NMR or ESR but also for other technologies. Heat-bath algorithmic cooling is a method to increase the purity of a set of qubits coupled to a bath. We investigate the achievable polarization by analyzing the limit in which no more entropy can be extracted from the system. In particular, we give an analytic form for the maximum polarization achievable when the initial state of the qubits is totally mixed, along with the corresponding steady state of the whole system. It is, however, possible to reach higher polarization when starting from certain other states; thus, our result provides an achievable bound. We also give the number of steps needed to reach a specific required polarization. PMID:27176508
The efficient algorithms for achieving Euclidean distance transformation.
Shih, Frank Y; Wu, Yi-Ta
2004-08-01
Euclidean distance transformation (EDT) converts a digital binary image consisting of object (foreground) and nonobject (background) pixels into an image in which each pixel has a value equal to the minimum Euclidean distance to a nonobject pixel. In this paper, an improved iterative erosion algorithm is proposed to avoid the redundant calculations of the iterative erosion algorithm. Furthermore, to avoid iterative operations altogether, a two-scan-based algorithm derived from this approach is developed that achieves the EDT correctly and efficiently in constant time. In addition, we show that when obstacles appear in the image, many algorithms fail to compute the correct EDT, whereas our two-scan-based algorithm succeeds, without the additional cost of preprocessing or recording relative coordinates.
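The definition of the EDT can be made concrete with a naive reference implementation. This Python sketch computes the exact transform by exhaustive search (quadratic cost, nothing like the efficient two-scan algorithm of the paper); the 5×5 test image is an invented example.

```python
import numpy as np

def edt_bruteforce(img):
    """Exact Euclidean distance transform by exhaustive search.
    img: 2-D array, nonzero = object (foreground), 0 = background.
    Returns, for each pixel, the Euclidean distance to the nearest
    background pixel (0 for background pixels themselves)."""
    rows, cols = img.shape
    bg = np.argwhere(img == 0)          # background pixel coordinates
    out = np.zeros(img.shape)
    for r in range(rows):
        for c in range(cols):
            if img[r, c]:
                d2 = ((bg - (r, c)) ** 2).sum(axis=1)  # squared distances
                out[r, c] = np.sqrt(d2.min())
    return out

img = np.zeros((5, 5), int)
img[1:4, 1:4] = 1                       # a 3x3 object block
dist = edt_bruteforce(img)
print(dist[2, 2])                       # center of the block
```

Any fast EDT algorithm, including the two-scan approach, must reproduce exactly these values, which makes a brute-force version like this useful as a correctness oracle.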
Higher Education Counts: Achieving Results. 2009 Executive Summary
ERIC Educational Resources Information Center
Connecticut Department of Higher Education (NJ1), 2009
2009-01-01
"Higher Education Counts" is the annual accountability report on Connecticut's system of higher education. Since 2000, the report has been the primary vehicle for reporting higher education's progress toward achieving six statutorily defined state goals: (1) To enhance student learning and promote academic excellence; (2) To join with elementary…
Higher Education Counts: Achieving Results, 2008. Executive Summary
ERIC Educational Resources Information Center
Connecticut Department of Higher Education (NJ1), 2008
2008-01-01
"Higher Education Counts" is the annual accountability report on Connecticut's system of higher education. Since 2000, the report has been the primary vehicle for reporting higher education's progress toward achieving six statutorily defined state goals: (1) To enhance student learning and promote academic excellence; (2) To join with elementary…
Higher Education Counts: Achieving Results. 2006 Executive Summary
ERIC Educational Resources Information Center
Connecticut Department of Higher Education (NJ1), 2006
2006-01-01
"Higher Education Counts" is the annual accountability report on Connecticut's system of higher education. Since 2000, the report has been the principal vehicle for reporting higher education's progress toward achieving six statutorily defined state goals: (1) To enhance student learning and promote academic excellence; (2) To join with…
ERIC Educational Resources Information Center
Butz, Stephen D.
2012-01-01
This research examined the education system at high-poverty schools that had significantly higher student achievement levels as compared to similar schools with lower student achievement levels. A multischool qualitative case study was conducted of the educational systems where there was a significant difference in the scores achieved on the…
Achieving Equity in Higher Education: The Unfinished Agenda
ERIC Educational Resources Information Center
Astin, Alexander W.; Astin, Helen S.
2015-01-01
In this retrospective account of their scholarly work over the past 45 years, Alexander and Helen Astin show how the struggle to achieve greater equity in American higher education is intimately connected to issues of character development, leadership, civic responsibility, and spirituality. While shedding some light on a variety of questions…
Achieving Higher Energies via Passively Driven X-band Structures
NASA Astrophysics Data System (ADS)
Sipahi, Taylan; Sipahi, Nihan; Milton, Stephen; Biedron, Sandra
2014-03-01
Due to their higher intrinsic shunt impedance, X-band accelerating structures can achieve significant gradients with relatively modest input powers, which can lead to more compact particle accelerators. At the Colorado State University Accelerator Laboratory (CSUAL) we would like to adapt this technology to our 1.3 GHz L-band accelerator system using a passively driven 11.7 GHz traveling wave X-band configuration that capitalizes on the high shunt impedances achievable in X-band accelerating structures in order to increase our overall beam energy in a manner that does not require investment in an expensive, custom, high-power X-band klystron system. Here we provide the design details of the X-band structures that will allow us to achieve our goal of reaching the maximum practical net potential across the X-band accelerating structure while driven solely by the beam from the L-band system.
Algorithm for Determination of Orion Ascent Abort Mode Achievability
NASA Technical Reports Server (NTRS)
Tedesco, Mark B.
2011-01-01
For human spaceflight missions, a launch vehicle failure poses the challenge of returning the crew safely to earth through environments that are often much more stressful than the nominal mission. Manned spaceflight vehicles require continuous abort capability throughout the ascent trajectory to protect the crew in the event of a failure of the launch vehicle. To provide continuous abort coverage during the ascent trajectory, different types of Orion abort modes have been developed. If a launch vehicle failure occurs, the crew must be able to quickly and accurately determine the appropriate abort mode to execute. Early in the ascent, while the Launch Abort System (LAS) is attached, abort mode selection is trivial, and any failures will result in a LAS abort. For failures after LAS jettison, the Service Module (SM) effectors are employed to perform abort maneuvers. Several different SM abort mode options are available depending on the current vehicle location and energy state. During this region of flight the selection of the abort mode that maximizes the survivability of the crew becomes non-trivial. To provide the most accurate and timely information to the crew and the onboard abort decision logic, on-board algorithms have been developed to propagate the abort trajectories based on the current launch vehicle performance and to predict the current abort capability of the Orion vehicle. This paper will provide an overview of the algorithm architecture for determining abort achievability as well as the scalar integration scheme that makes the onboard computation possible. Extension of the algorithm to assessing abort coverage impacts from Orion design modifications and launch vehicle trajectory modifications is also presented.
Achieving Algorithmic Resilience for Temporal Integration through Spectral Deferred Corrections
Grout, R. W.; Kolla, H.; Minion, M. L.; Bell, J. B.
2015-04-06
Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual on the first correction iteration and changes slowly between successive iterations. We demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.
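The SDC iteration described above can be sketched compactly. This is a minimal Python illustration assuming forward-Euler correction sweeps on three Lobatto nodes and the linear test problem y' = -y; both choices are ours for the example, not from the paper, and the resilience strategy itself (repeating sweeps after a fault) would simply run extra iterations of the same loop.

```python
import numpy as np

def integration_matrix(nodes):
    """S[m, j] ~ integral of the Lagrange basis l_j from t_m to t_{m+1}."""
    M = len(nodes)
    S = np.zeros((M - 1, M))
    for j in range(M):
        others = [nodes[i] for i in range(M) if i != j]
        coeffs = np.poly(others) / np.prod([nodes[j] - x for x in others])
        antider = np.polyint(coeffs)
        for m in range(M - 1):
            S[m, j] = np.polyval(antider, nodes[m + 1]) - np.polyval(antider, nodes[m])
    return S

def sdc_step(f, y0, t0, dt, sweeps=4):
    """One SDC time step on 3 Lobatto nodes with forward-Euler sweeps."""
    nodes = t0 + dt * np.array([0.0, 0.5, 1.0])
    S = integration_matrix(nodes)
    y = np.full(3, y0, dtype=float)
    for m in range(2):                         # initial pass: forward Euler
        y[m + 1] = y[m] + (nodes[m + 1] - nodes[m]) * f(nodes[m], y[m])
    for _ in range(sweeps):                    # each sweep raises the order
        fold = np.array([f(t, v) for t, v in zip(nodes, y)])
        ynew = y.copy()
        for m in range(2):
            h = nodes[m + 1] - nodes[m]
            ynew[m + 1] = (ynew[m]
                           + h * (f(nodes[m], ynew[m]) - fold[m])
                           + S[m] @ fold)      # spectral quadrature of old f
        y = ynew
    return y[-1]

f = lambda t, y: -y                            # test problem y' = -y
y, t = 1.0, 0.0
for _ in range(10):                            # integrate to t = 1, dt = 0.1
    y = sdc_step(f, y, t, 0.1)
    t += 0.1
print(abs(y - np.exp(-1.0)))                   # small: ~4th-order accurate
```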
A consolidation algorithm for genomes fractionated after higher order polyploidization
2012-01-01
Background: It has recently been shown that fractionation, the random loss of excess gene copies after a whole genome duplication event, is a major cause of gene order disruption. When estimating evolutionary distances between genomes based on chromosomal rearrangement, fractionation inevitably leads to significant overestimation of classic rearrangement distances. This bias can be largely avoided when genomes are preprocessed by "consolidation", a procedure that identifies and accounts for regions of fractionation. Results: In this paper, we present a new consolidation algorithm that extends and improves previous work in several directions. We extend the notion of the fractionation region to use information provided by regions where this process is still ongoing. The new algorithm can optionally work with this new definition of fractionation region and is able to process not only tetraploids but also genomes that have undergone hexaploidization and polyploidization events of higher order. Finally, this algorithm reduces the asymptotic time complexity of consolidation from quadratic to linear dependence on the genome size. The new algorithm is applied both to plant genomes and to simulated data to study the effect of fractionation in ancient hexaploids. PMID:23282012
Charting the course for nurses' achievement of higher education levels.
Kovner, Christine T; Brewer, Carol; Katigbak, Carina; Djukic, Maja; Fatehi, Farida
2012-01-01
To improve patient outcomes and meet the challenges of the U.S. health care system, the Institute of Medicine recommends higher educational attainment for the nursing workforce. Characteristics of registered nurses (RNs) who pursue additional education are poorly understood, and this information is critical to planning long-term strategies for U.S. nursing education. To identify factors predicting enrollment and completion of an additional degree among those with an associate or bachelor's as their pre-RN licensure degree, we performed logistic regression analysis on data from an ongoing nationally representative panel study following the career trajectories of newly licensed RNs. For associate degree RNs, predictors of obtaining a bachelor's degree are the following: being Black, living in a rural area, nonnursing work experience, higher positive affectivity, higher work motivation, working in the intensive care unit, and working the day shift. For bachelor's RNs, predictors of completing a master's degree are the following: being Black, nonnursing work experience, holding more than one job, working the day shift, working voluntary overtime, lower intent to stay at current employer, and higher work motivation. Mobilizing the nurse workforce toward higher education requires integrated efforts from policy makers, philanthropists, employers, and educators to mitigate the barriers to continuing education. PMID:23158196
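The analysis above rests on logistic regression of enrollment outcomes on nurse characteristics. As an illustration only, here is a pure-NumPy gradient-descent fit on synthetic data; the predictor names (motivation, day_shift) echo the abstract but the data, coefficients, and model are invented for the sketch and have no connection to the study's panel data.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Logistic regression by gradient descent (pure-NumPy sketch).
    X: (n, d) predictors, y: (n,) binary outcome.
    Returns weights, with the intercept first."""
    Xb = np.hstack([np.ones((len(X), 1)), X])     # prepend intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))         # predicted probabilities
        w += lr * Xb.T @ (y - p) / len(y)         # ascend the log-likelihood
    return w

rng = np.random.default_rng(0)
n = 2000
motivation = rng.standard_normal(n)               # hypothetical predictor
day_shift = rng.integers(0, 2, n).astype(float)   # hypothetical predictor
logit = -1.0 + 1.5 * motivation + 0.8 * day_shift # invented true model
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

w = fit_logistic(np.column_stack([motivation, day_shift]), y)
print(np.round(w, 2))   # estimates recover the signs of the true effects
```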
Strategies for Increasing Academic Achievement in Higher Education
ERIC Educational Resources Information Center
Ensign, Julene; Woods, Amelia Mays
2014-01-01
Higher education today faces unique challenges. Decreasing student engagement, increasing diversity, and limited resources all contribute to the issues being faced by students, educators, and administrators alike. The unique characteristics and expectations that students bring to their professional programs require new methods of addressing…
A new adaptive GMRES algorithm for achieving high accuracy
Sosonkina, M.; Watson, L.T.; Kapania, R.K.; Walker, H.F.
1996-12-31
GMRES(k) is widely used for solving nonsymmetric linear systems. However, it is inadequate either when it converges only for k close to the problem size or when numerical error in the modified Gram-Schmidt process used in the GMRES orthogonalization phase dramatically affects the algorithm's performance. An adaptive version of GMRES(k), which tunes the restart value k based on criteria estimating the GMRES convergence rate for the given problem, is proposed here. The essence of the adaptive GMRES strategy is to adapt the parameter k to the problem, similar in spirit to how a variable-order ODE algorithm tunes the order k. With FORTRAN 90, which provides pointers and dynamic memory management, dealing with the variable storage requirements implied by varying k is not too difficult. The parameter k can be both increased and decreased; an increase-only strategy is described next, followed by pseudocode.
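A rough Python sketch of the adaptive-restart idea: a hand-rolled GMRES(k) cycle with modified Gram-Schmidt, wrapped in a driver that doubles k whenever the residual stalls. The stall threshold (0.9 per cycle) and the well-conditioned test matrix are illustrative choices of ours, not the convergence-rate criteria proposed in the paper.

```python
import numpy as np

def gmres_cycle(A, b, x0, k, tol):
    """One GMRES(k) cycle with modified Gram-Schmidt Arnoldi.
    Returns the updated iterate and its residual norm."""
    n = len(b)
    r = b - A @ x0
    beta = np.linalg.norm(r)
    if beta < tol:
        return x0, beta
    Q = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    Q[:, 0] = r / beta
    for j in range(k):
        w = A @ Q[:, j]
        for i in range(j + 1):               # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w = w - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:              # lucky breakdown
            k = j + 1
            break
        Q[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(k + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:k + 1, :k], e1, rcond=None)
    x = x0 + Q[:, :k] @ y
    return x, np.linalg.norm(b - A @ x)

def adaptive_gmres(A, b, k0=5, kmax=30, tol=1e-10, max_cycles=100):
    """Restarted GMRES that grows k when convergence stalls."""
    x = np.zeros_like(b)
    k = k0
    rold = np.linalg.norm(b)
    for _ in range(max_cycles):
        x, rnew = gmres_cycle(A, b, x, k, tol)
        if rnew < tol:
            return x, k
        if rnew > 0.9 * rold and k < kmax:   # slow progress: raise restart
            k = min(2 * k, kmax)
        rold = rnew
    return x, k

rng = np.random.default_rng(0)
n = 100
A = 2 * np.eye(n) + 0.1 * rng.standard_normal((n, n))  # nonsymmetric, easy
b = rng.standard_normal(n)
x, k_final = adaptive_gmres(A, b)
print(np.linalg.norm(A @ x - b))
```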
Fuzzy Pool Balance: An algorithm to achieve a two dimensional balance in distribute storage systems
NASA Astrophysics Data System (ADS)
Wu, Wenjing; Chen, Gang
2014-06-01
The limitation of scheduling modules and the gradual addition of disk pools in distributed storage systems often result in imbalances among their disk pools in terms of both disk usage and file count. This can cause various problems to the storage system such as single point of failure, low system throughput and imbalanced resource utilization and system loads. An algorithm named Fuzzy Pool Balance (FPB) is proposed here to solve this problem. The input of FPB is the current file distribution among disk pools and the output is a file migration plan indicating what files are to be migrated to which pools. FPB uses an array to classify the files by their sizes. The file classification array is dynamically calculated with a defined threshold named Tmax that defines the allowed pool disk usage deviations. File classification is the basis of file migration. FPB also defines the Immigration Pool (IP) and Emigration Pool (EP) according to the pool disk usage and File Quantity Ratio (FQR) that indicates the percentage of each category of files in each disk pool, so files with higher FQR in an EP will be migrated to IP(s) with a lower FQR of this file category. To verify this algorithm, we implemented FPB on an ATLAS Tier2 dCache production system. The results show that FPB can achieve a very good balance in both free space and file counts, and adjusting the threshold value Tmax and the correction factor to the average FQR can achieve a tradeoff between free space and file count.
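A toy version of the balancing idea can clarify the input/output contract the abstract describes (current distribution in, migration plan out). This sketch reduces the problem to total disk usage only, omitting FPB's file-size categories and File Quantity Ratio, and uses a greedy move rule; the pool names and the Tmax default are invented for the illustration.

```python
def usage(pools):
    """Per-pool disk usage (sum of file sizes)."""
    return {p: sum(fs) for p, fs in pools.items()}

def plan_migrations(pools, tmax=0.05):
    """Greedy sketch: move files from over-used to under-used pools until
    every pool's usage is within tmax of the mean. Returns the migration
    plan as (size, from_pool, to_pool) tuples plus the balanced layout."""
    pools = {p: list(fs) for p, fs in pools.items()}   # work on a copy
    total = sum(sum(fs) for fs in pools.values())
    target = total / len(pools)
    plan = []
    while True:
        u = usage(pools)
        heavy = max(u, key=u.get)                      # emigration pool
        light = min(u, key=u.get)                      # immigration pool
        gap = u[heavy] - u[light]
        if (u[heavy] - target <= tmax * target
                and target - u[light] <= tmax * target):
            break                                      # within threshold
        # pick the file in the heavy pool that best closes the gap
        best = min(pools[heavy], key=lambda s: abs(gap - 2 * s))
        if abs(gap - 2 * best) >= gap:                 # no move improves it
            break
        pools[heavy].remove(best)
        pools[light].append(best)
        plan.append((best, heavy, light))
    return plan, pools

pools = {"a": [10, 10, 10, 10], "b": [10], "c": [10]}
plan, balanced = plan_migrations(pools)
print(plan)   # two moves of size 10 out of pool "a"
```

The real FPB would additionally keep each size category's FQR balanced, which this sketch ignores.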
DeLay, Dawn; Laursen, Brett; Kiuru, Noona; Poikkeus, Anna-Maija; Aunola, Kaisa; Nurmi, Jari-Erik
2015-11-01
This study was designed to investigate friend influence over mathematical reasoning in a sample of 374 children in 187 same-sex friend dyads (184 girls in 92 friendships; 190 boys in 95 friendships). Participants completed surveys that measured mathematical reasoning in the 3rd grade (approximately 9 years old) and 1 year later in the 4th grade (approximately 10 years old). Analyses designed for dyadic data (i.e., longitudinal actor-partner interdependence model) indicated that higher achieving friends influenced the mathematical reasoning of lower achieving friends, but not the reverse. Specifically, greater initial levels of mathematical reasoning among higher achieving partners in the 3rd grade predicted greater increases in mathematical reasoning from 3rd grade to 4th grade among lower achieving partners. These effects held after controlling for peer acceptance and rejection, task avoidance, interest in mathematics, maternal support for homework, parental education, length of the friendship, and friendship group norms on mathematical reasoning. PMID:26402901
ERIC Educational Resources Information Center
Kaminskiene, Lina; Stasiunaitiene, Egle
2013-01-01
The article identifies the validity of assessment of non-formal and informal learning achievements (NILA) as one of the key factors for encouraging further development of the process of assessing and recognising non-formal and informal learning achievements in higher education. The authors analyse why the recognition of non-formal and informal…
Soy Mujer!: A Case Study for Understanding Latina Achievement in Higher Education
ERIC Educational Resources Information Center
Stephens, Elizabeth
2012-01-01
Latinas are one of fastest growing segments of the population in the United States, which clearly shows a need to better understand and support education for Latinas within higher education. This study sought to understand the process for and experience of Latinas' academic achievement within higher education. The study focused particularly…
ERIC Educational Resources Information Center
Arredondo, Patricia; Castillo, Linda G.
2011-01-01
Latina/o student achievement is a priority for the American Association of Hispanics in Higher Education (AAHHE). To date, AAHHE has worked deliberately on this agenda. However, well-established higher education associations such as the Association of American Universities (AAU) and the Association of Public and Land-grant Universities (APLU) are…
Relationship between Study Habits and Academic Achievement of Higher Secondary School Students
ERIC Educational Resources Information Center
Lawrence, A. S. Arul
2014-01-01
The present study was probed to find the significant relationship between study habits and academic achievement of higher secondary school students with reference to the background variables. Survey method was employed. Data for the study were collected from 300 students in 13 higher secondary schools using Study Habits Inventory by V.G. Anantha…
A general higher-order remap algorithm for ALE calculations
Chiravalle, Vincent P
2011-01-05
A numerical technique for solving the equations of fluid dynamics with arbitrary mesh motion is presented. The three phases of the Arbitrary Lagrangian Eulerian (ALE) methodology are outlined: the Lagrangian phase, grid relaxation phase and remap phase. The Lagrangian phase follows a well-known approach from the HEMP code; in addition the strain rate and flow divergence are calculated in a consistent manner according to Margolin. A donor cell method from the SALE code forms the basis of the remap step, but unlike SALE a higher order correction based on monotone gradients is also added to the remap. Four test problems were explored to evaluate the fidelity of these numerical techniques, as implemented in a simple test code, written in the C programming language, called Cercion. Novel cell-centered data structures are used in Cercion to reduce the complexity of the programming and maximize the efficiency of memory usage. The locations of the shock and contact discontinuity in the Riemann shock tube problem are well captured. Cercion demonstrates a high degree of symmetry when calculating the Sedov blast wave solution, with a peak density at the shock front that is similar to the value determined by the RAGE code. For a flyer plate test problem both Cercion and FLAG give virtually the same velocity temporal profile at the target-vacuum interface. When calculating a cylindrical implosion of a steel shell, Cercion and FLAG agree well and the Cercion results are insensitive to the use of ALE.
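The core of such a remap, donor cell plus a monotone higher-order correction, can be illustrated in 1D. This Python sketch remaps cell averages onto the same uniform mesh shifted by a fixed displacement, using a minmod-limited linear reconstruction; it is a simplification of ours, not code from Cercion, and the pulse profile and shift are arbitrary.

```python
import numpy as np

def minmod(a, b):
    """Monotone slope limiter: zero at extrema, else smaller-magnitude slope."""
    return np.where(a * b <= 0, 0.0, np.where(np.abs(a) < np.abs(b), a, b))

def remap(u, delta, dx=1.0):
    """Remap cell averages u from a uniform mesh onto the same mesh shifted
    by delta (0 <= delta < dx): donor-cell transport plus a limited linear
    correction, integrated exactly over the overlaps. Periodic boundaries."""
    up = np.roll(u, -1)                      # u[i+1]
    um = np.roll(u, 1)                       # u[i-1]
    s = minmod(up - u, u - um) / dx          # limited gradient in each cell
    sp = np.roll(s, -1)
    # exact overlap integrals of the piecewise-linear reconstruction:
    # (dx - delta) from old cell i, delta from old cell i+1
    return ((dx - delta) * (u + s * delta / 2)
            + delta * (up + sp * (delta - dx) / 2)) / dx

u = np.array([0.0, 0.2, 1.0, 1.0, 0.3, 0.0])   # a pulse with sloped edges
v = remap(u, 0.3)
print(v.sum(), u.sum())                         # the remap is conservative
```

With the slope set to zero this reduces to the first-order donor cell method; the minmod correction sharpens the profile while keeping the new averages within the local data bounds.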
ERIC Educational Resources Information Center
Schmid, Richard F.; Bernard, Robert M.; Borokhovski, Eugene; Tamim, Rana; Abrami, Philip C.; Wade, C. Anne; Surkes, Michael A.; Lowerison, Gretchen
2009-01-01
This paper reports the findings of a Stage I meta-analysis exploring the achievement effects of computer-based technology use in higher education classrooms (non-distance education). An extensive literature search revealed more than 6,000 potentially relevant primary empirical studies. Analysis of a representative sample of 231 studies (k = 310)…
Leveraging Quality Improvement to Achieve Student Learning Assessment Success in Higher Education
ERIC Educational Resources Information Center
Glenn, Nancy Gentry
2009-01-01
Mounting pressure for transformational change in higher education driven by technology, globalization, competition, funding shortages, and increased emphasis on accountability necessitates that universities implement reforms to demonstrate responsiveness to all stakeholders and to provide evidence of student achievement. In the face of the demand…
ERIC Educational Resources Information Center
Magen-Nagar, Noga
2016-01-01
The purpose of the current study is to explore the effects of learning strategies on Mathematical Literacy (ML) of students in higher and lower achieving countries. To address this issue, the study utilizes PISA2002 data to conduct a multi-level analysis (HLM) of Hong Kong and Israel students. In PISA2002, Israel was rated 31st in Mathematics,…
An Analysis of Factors Influencing the Achievement of Higher Education by Chief Fire Officers
ERIC Educational Resources Information Center
Ditch, Robert L.
2012-01-01
The leadership of the United States Fire Service (FS) believes that higher education increases the professionalism of FS members. The research problem at the research site, which is a multisite fire department located in southeastern United States, was the lack of research-based findings on the factors influencing the achievement of higher…
Fast algorithm for scaling analysis with higher-order detrending moving average method
NASA Astrophysics Data System (ADS)
Tsujimoto, Yutaka; Miki, Yuki; Shimatani, Satoshi; Kiyono, Ken
2016-05-01
Among scaling analysis methods based on the root-mean-square deviation from the estimated trend, it has been demonstrated that centered detrending moving average (DMA) analysis with a simple moving average has good performance when characterizing long-range correlation or fractal scaling behavior. Furthermore, higher-order DMA has also been proposed; it is shown to have better detrending capabilities, removing higher-order polynomial trends than original DMA. However, a straightforward implementation of higher-order DMA requires a very high computational cost, which would prevent practical use of this method. To solve this issue, in this study, we introduce a fast algorithm for higher-order DMA, which consists of two techniques: (1) parallel translation of moving averaging windows by a fixed interval; (2) recurrence formulas for the calculation of summations. Our algorithm can significantly reduce computational cost. Monte Carlo experiments show that the computational time of our algorithm is approximately proportional to the data length, although that of the conventional algorithm is proportional to the square of the data length. The efficiency of our algorithm is also shown by a systematic study of the performance of higher-order DMA, such as the range of detectable scaling exponents and detrending capability for removing polynomial trends. In addition, through the analysis of heart-rate variability time series, we discuss possible applications of higher-order DMA.
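The baseline method in the abstract, centered DMA with a simple moving average, fits in a few lines. This Python sketch estimates the scaling exponent of white noise (whose integrated profile is a random walk, so the exponent should come out near 0.5); it is the zeroth-order method, not the paper's fast higher-order algorithm, and the scales and random seed are arbitrary.

```python
import numpy as np

def dma_fluctuation(x, n):
    """Centered zeroth-order DMA fluctuation at scale n (n odd):
    RMS deviation of the integrated profile from its moving average."""
    y = np.cumsum(x - np.mean(x))                     # integrated profile
    trend = np.convolve(y, np.ones(n) / n, mode='valid')  # centered SMA
    half = n // 2
    resid = y[half:len(y) - half] - trend             # align with 'valid'
    return np.sqrt(np.mean(resid ** 2))

rng = np.random.default_rng(1)
x = rng.standard_normal(2 ** 14)                      # white-noise input
scales = np.array([5, 9, 17, 33, 65])
F = np.array([dma_fluctuation(x, n) for n in scales])
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]   # slope of log F vs log n
print(round(alpha, 2))                                # near 0.5 for white noise
```

Higher-order DMA replaces the simple moving average with a local polynomial fit in each window, which is where the quadratic cost, and the paper's fast recurrence-based algorithm, come in.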
Nieto, C A Rosales; Ferguson, M B; Macleay, C A; Briegel, J R; Wood, D A; Martin, G B; Thompson, A N
2013-09-15
We studied the relationships among growth, body composition and reproductive performance in ewe lambs with known phenotypic values for depth of eye muscle (EMD) and fat (FAT) and Australian Sheep Breeding Values for post-weaning live weight (PWT) and depth of eye muscle (PEMD) and fat (PFAT). To detect estrus, vasectomized rams were placed with 190 Merino ewe lambs when on average they were 157 days old. The vasectomized rams were replaced with entire rams when the ewe lambs were, on average, 226 days old. Lambs were weighed every week and blood was sampled on four occasions for assay of ghrelin, leptin and β-hydroxybutyrate. Almost 90% of the lambs attained puberty during the experiment, at an average live weight of 41.4 kg and average age of 197 days. Ewe lambs with higher values for EMD (P < 0.001), FAT (P < 0.01), PWT (P < 0.001), PEMD (P < 0.05) and PFAT (P < 0.05) were more likely to achieve puberty by 251 days of age. Thirty-six percent of the lambs conceived and, at the estimated date of conception, the average live weight was 46.9 ± 0.6 kg and average age was 273 days. Fertility, fecundity and reproductive rate were positively related to PWT (P < 0.05) and thus live weight at the start of mating (P < 0.001). Reproductive performance was not correlated with blood concentrations of ghrelin, leptin or β-hydroxybutyrate. Many ewe lambs attained puberty, as detected by vasectomized rams, but then failed to become pregnant after mating with entire rams. Nevertheless, we can conclude that in ewe lambs mated at 8 months of age, higher breeding values for growth, muscle and fat are positively correlated with reproductive performance, although the effects of breeding values and responses to live weight are highly variable.
Higher-Order, Space-Time Adaptive Finite Volume Methods: Algorithms, Analysis and Applications
Minion, Michael
2014-04-29
The four main goals outlined in the proposal for this project were: 1. Investigate the use of higher-order (in space and time) finite-volume methods for fluid flow problems. 2. Explore the embedding of iterative temporal methods within traditional block-structured AMR algorithms. 3. Develop parallel in time methods for ODEs and PDEs. 4. Work collaboratively with the Center for Computational Sciences and Engineering (CCSE) at Lawrence Berkeley National Lab towards incorporating new algorithms within existing DOE application codes.
Estes, Annette; Rivera, Vanessa; Bryan, Matthew; Cali, Philip; Dawson, Geraldine
2011-08-01
Academic achievement patterns and their relationships with intellectual ability, social abilities, and problem behavior are described in a sample of 30 higher-functioning, 9-year-old children with autism spectrum disorder (ASD). Both social abilities and problem behavior have been found to be predictive of academic achievement in typically developing children but this has not been well studied in children with ASD. Participants were tested for academic achievement and intellectual ability at age 9. Problem behaviors were assessed through parent report and social functioning through teacher report at age 6 and 9. Significant discrepancies between children's actual academic achievement and their expected achievement based on their intellectual ability were found in 27 of 30 (90%) children. Both lower than expected and higher than expected achievement was observed. Children with improved social skills at age 6 demonstrated higher levels of academic achievement, specifically word reading, at age 9. No relationship was found between children's level of problem behavior and level of academic achievement. These results suggest that the large majority of higher-functioning children with ASD show discrepancies between actual achievement levels and levels predicted by their intellectual ability. In some cases, children are achieving higher than expected, whereas in others, they are achieving lower than expected. Improved social abilities may contribute to academic achievement. Future studies should further explore factors that can promote strong academic achievement, including studies that examine whether intervention to improve social functioning can support academic achievement in children with ASD. PMID:21042871
Leveraging People-Related Maturity Issues for Achieving Higher Maturity and Capability Levels
NASA Astrophysics Data System (ADS)
Buglione, Luigi
During the past 20 years, Maturity Models (MM) have become a buzzword in the ICT world. Since Crosby's initial idea in 1979, plenty of models have been created in the Software & Systems Engineering domains, addressing various perspectives. Analyzing the content of the Process Reference Models (PRM) in many of them shows that people-related issues have little weight in appraisals of the capabilities of organizations, while in practice they are considered significant contributors in traditional process and organizational performance appraisals, as stressed in well-known Performance Management models such as MBQA, EFQM and BSC. This paper proposes some ways of leveraging people-related maturity issues by merging HR practices from several types of maturity models into the organizational Business Process Model (BPM) in order to achieve higher organizational maturity and capability levels.
Virtual Laboratories to Achieve Higher-Order Learning in Fluid Mechanics
NASA Astrophysics Data System (ADS)
Ward, A. S.; Gooseff, M. N.; Toto, R.
2009-12-01
Bloom's higher-order cognitive skills (analysis, evaluation, and synthesis) are recognized as necessary in engineering education, yet they are difficult to achieve in traditional lecture formats. Laboratory components supplement traditional lectures in an effort to emphasize active learning and provide higher-order challenges, but these laboratories are often subject to the constraints of (a) increasing student enrollment, (b) limited funding for operational, maintenance, and instructional expenses, and (c) increasing demands on undergraduate student credit requirements. Here, we present results from a pilot project implementing virtual (or online) laboratory experiences as an alternative to a traditional laboratory experience in Fluid Mechanics, a required third-year course. Students and faculty were surveyed to identify the topics that were most difficult, and virtual laboratory and design components were developed to supplement lecture material. Each laboratory includes a traditional lab component requiring student analysis and evaluation. The lab concludes with a design exercise, which imposes additional problem constraints and allows students to apply their laboratory observations to a real-world situation.
Pyramiding B genes in cotton achieves broader but not always higher resistance to bacterial blight.
Essenberg, Margaret; Bayles, Melanie B; Pierce, Margaret L; Verhalen, Laval M
2014-10-01
Near-isogenic lines of upland cotton (Gossypium hirsutum) carrying single, race-specific genes B4, BIn, and b7 for resistance to bacterial blight were used to develop a pyramid of lines with all possible combinations of two and three genes to learn whether the pyramid could achieve broad and high resistance approaching that of L. A. Brinkerhoff's exceptional line Im216. Isogenic strains of Xanthomonas axonopodis pv. malvacearum carrying single avirulence (avr) genes were used to identify plants carrying specific resistance (B) genes. Under field conditions in north-central Oklahoma, pyramid lines exhibited broader resistance to individual races and, consequently, higher resistance to a race mixture. It was predicted that lines carrying two or three B genes would also exhibit higher resistance to race 1, which possesses many avr genes. Although some enhancements were observed, they did not approach the level of resistance of Im216. In a growth chamber, bacterial populations attained by race 1 in and on leaves of the pyramid lines decreased significantly with increasing number of B genes in only one of four experiments. The older lines, Im216 and AcHR, exhibited considerably lower bacterial populations than any of the one-, two-, or three-B-gene lines. A spreading collapse of spray-inoculated AcBIn and AcBInb7 leaves appears to be a defense response (conditioned by BIn) that is out of control. PMID:24655289
Is Equal Access to Higher Education in South Asia and Sub-Saharan Africa Achievable by 2030?
ERIC Educational Resources Information Center
Ilie, Sonia; Rose, Pauline
2016-01-01
Higher education is back in the spotlight, with post-2015 sustainable development goals emphasising equality of access. In this paper, we highlight the long distance still to travel to achieve the goal of equal access to higher education for all, with a focus on poorer countries which tend to have lower levels of enrolment in higher education.…
Harmon, Tyler S; Crabtree, Michael D; Shammas, Sarah L; Posey, Ammon E; Clarke, Jane; Pappu, Rohit V
2016-09-01
Many intrinsically disordered proteins (IDPs) participate in coupled folding and binding reactions and form alpha helical structures in their bound complexes. Alanine, glycine, or proline scanning mutagenesis approaches are often used to dissect the contributions of intrinsic helicities to coupled folding and binding. These experiments can yield confounding results because the mutagenesis strategy changes the amino acid compositions of IDPs. Therefore, an important next step in mutagenesis-based approaches to mechanistic studies of coupled folding and binding is the design of sequences that satisfy three major constraints. These are (i) achieving a target intrinsic alpha helicity profile; (ii) fixing the positions of residues corresponding to the binding interface; and (iii) maintaining the native amino acid composition. Here, we report the development of a Genetic Algorithm for Design of Intrinsic secondary Structure (GADIS) for designing sequences that satisfy the specified constraints. We describe the algorithm and present results to demonstrate the applicability of GADIS by designing sequence variants of the intrinsically disordered PUMA system that undergoes coupled folding and binding to Mcl-1. Our sequence designs span a range of intrinsic helicity profiles. The predicted variations in sequence-encoded mean helicities are tested against experimental measurements. PMID:27503953
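The three design constraints listed in the abstract above can be sketched with a minimal composition-preserving evolutionary loop: swap mutations keep the amino acid composition fixed, binding-interface positions are frozen, and fitness measures the distance to a target helicity profile. The `toy_helicity` scorer and all sequences below are hypothetical stand-ins; GADIS itself uses a real helicity predictor and a full genetic algorithm rather than this simple (1+1) hill climb.

```python
import random

random.seed(0)

HELIX_PRONE = set("AELMQK")          # toy stand-in, not GADIS's predictor

def toy_helicity(seq):
    """Fraction of helix-prone residues in a sliding 5-mer window."""
    w = 5
    return [sum(r in HELIX_PRONE for r in seq[i:i + w]) / w
            for i in range(len(seq) - w + 1)]

def fitness(seq, target):
    """Negative squared distance between helicity profile and target."""
    return -sum((a - b) ** 2 for a, b in zip(toy_helicity(seq), target))

def swap_mutate(seq, frozen):
    """Swap two non-interface positions: composition is preserved."""
    s = list(seq)
    free = [i for i in range(len(s)) if i not in frozen]
    i, j = random.sample(free, 2)
    s[i], s[j] = s[j], s[i]
    return "".join(s)

def evolve(seq, target, frozen, gens=2000):
    best, best_f = seq, fitness(seq, target)
    for _ in range(gens):
        cand = swap_mutate(best, frozen)
        f = fitness(cand, target)
        if f > best_f:
            best, best_f = cand, f
    return best

start = "GSAQELMKAGNDTSRVQELMKAGN"    # hypothetical example sequence
frozen = {3, 7, 11}                   # hypothetical binding-interface positions
target = [0.8] * (len(start) - 4)     # ask for high helicity everywhere
designed = evolve(start, target, frozen)
```

Because every mutation is a swap, constraint (iii) holds by construction, and freezing the interface indices enforces constraint (ii) without any penalty terms in the fitness function.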
ERIC Educational Resources Information Center
Rouse, Martyn; Florian, Lani
2006-01-01
This paper reports on a multi-method study that examined the effects of including higher and lower proportions of students designated as having special educational needs on student achievement in secondary schools. It explores some of the issues involved in conducting such research and considers the extent to which newly available national data in…
ERIC Educational Resources Information Center
Atkinson, Stephanie
2006-01-01
The aim of the study was to investigate the relationship between such factors as learning style, gender, prior experience, and successful achievement in contrasting modules taken by a cohort of thirty design and technology trainee teachers during their degree programme at a University in the North East of England. Achievement data were collected…
ERIC Educational Resources Information Center
Borman, Geoffrey D.; Kimball, Steven M.
2005-01-01
Using standards-based evaluation ratings for nearly 400 teachers, and achievement results for over 7,000 students from grades 4-6, this study investigated the distribution and achievement effects of teacher quality in Washoe County, a mid-sized school district serving Reno and Sparks, Nevada. Classrooms with higher concentrations of minority,…
ERIC Educational Resources Information Center
Estes, Annette; Rivera, Vanessa; Bryan, Matthew; Cali, Philip; Dawson, Geraldine
2011-01-01
Academic achievement patterns and their relationships with intellectual ability, social abilities, and problem behavior are described in a sample of 30 higher-functioning, 9-year-old children with autism spectrum disorder (ASD). Both social abilities and problem behavior have been found to be predictive of academic achievement in typically…
What Is the Best Way to Achieve Broader Reach of Improved Practices in Higher Education?
ERIC Educational Resources Information Center
Kezar, Adrianna
2011-01-01
This article examines a common problem in higher education--how to create more widespread use of improved practices, often commonly referred to as innovations. I argue that policy models of scale-up are often advocated in higher education but that they have a dubious history in community development and K-12 education and that higher education…
Comparison of Five System Identification Algorithms for Rotorcraft Higher Harmonic Control
NASA Technical Reports Server (NTRS)
Jacklin, Stephen A.
1998-01-01
This report presents an analysis and performance comparison of five system identification algorithms. The methods are presented in the context of identifying a frequency-domain transfer matrix for the higher harmonic control (HHC) of helicopter vibration. The five system identification algorithms include three previously proposed methods: (1) the weighted-least-squares-error approach (in moving-block format), (2) the Kalman filter method, and (3) the least-mean-squares (LMS) filter method. In addition, there are two new ones: (4) a generalized Kalman filter method and (5) a generalized LMS filter method. The generalized Kalman filter method and the generalized LMS filter method were derived as extensions of the classic methods to permit identification by using more than one measurement per identification cycle. Simulation results are presented for conditions ranging from the ideal case of a stationary transfer matrix and no measurement noise to the more complex cases involving both measurement noise and transfer-matrix variation. Both open-loop identification and closed-loop identification were simulated. Closed-loop identification was more challenging than open-loop identification because of the decreasing signal-to-noise ratio as the vibration became reduced. The closed-loop simulation considered both local-model identification with measured vibration feedback, and global-model identification with feedback of the identified uncontrolled vibration. The algorithms were evaluated in terms of their accuracy, stability, convergence properties, computation speeds, and relative ease of implementation.
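As a concrete illustration of the simplest of the five methods, the sketch below identifies an unknown transfer matrix from noisy input-output pairs with a plain LMS update. This is a generic textbook LMS sketch under assumed dimensions, step size, and probe inputs; it is not the report's HHC formulation, which works with frequency-domain harmonic coefficients.

```python
import numpy as np

rng = np.random.default_rng(1)
T_true = rng.standard_normal((3, 2))      # unknown transfer matrix
T_hat = np.zeros((3, 2))                  # initial estimate
mu = 0.05                                 # LMS step size

for _ in range(5000):
    u = rng.standard_normal(2)                       # probe input
    z = T_true @ u + 0.01 * rng.standard_normal(3)   # noisy measurement
    err = z - T_hat @ u                              # prediction error
    T_hat += mu * np.outer(err, u)                   # LMS gradient step

error_norm = np.linalg.norm(T_hat - T_true)
```

The Kalman filter variants in the report replace the fixed step size `mu` with a gain computed from the estimate covariance, which is what makes them robust to a slowly varying transfer matrix.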
Diversity and Achievement: Is Success in Higher Education a Transformative Experience?
ERIC Educational Resources Information Center
Benson, Robyn; Heagney, Margaret; Hewitt, Lesley; Crosling, Glenda; Devos, Anita
2014-01-01
This paper reports on a longitudinal project examining how a group of students from diverse backgrounds succeeded in higher education. The project explored participants' pathways into higher education, how they managed their studies, and their reflections at course completion. In this paper, the concept of perspective transformation is used…
Colonialism on Campus: A Critique of Mentoring to Achieve Equity in Higher Education.
ERIC Educational Resources Information Center
Collins, Roger L.
In order to reconceptualize the mentoring relationship in higher education, parallels to colonialist strategies of subordination are drawn. The objective is to stimulate renewed thinking and action more consistent with stated policy goals in higher education. One of the primary functions of a mentor or sponsor is to exercise personal power to…
ERIC Educational Resources Information Center
Catalano, D. Chase J.
2015-01-01
Trans* men have not, as yet, received specific research attention in higher education. Based on intensive interviews with 25 trans* men enrolled in colleges or universities in New England, I explore their experiences in higher education. I analyze participants' descriptions of supports and challenges in their collegiate environments, as well as…
Achievement Investment Prowess: Identifying Cost Efficient Higher Performing Maine Public Schools
ERIC Educational Resources Information Center
Batista, Ida A.
2006-01-01
Throughout the United States the debate has been frequent, intense, and at times adversarial over how to fund education adequately. Maine has been trying to identify higher performing schools in the hope that practices that contribute to success at higher performing schools can be adapted at similar schools throughout the state. The 1997…
The Effects of Higher Education/Military Service on Achievement Levels of Police Academy Cadets.
ERIC Educational Resources Information Center
Johnson, Thomas Allen
This study compared levels of achievement of three groups of Houston (Texas) police academy cadets: those with no military service but with 60 or more college credit hours, those with military service and 0 hours of college credit, and those with military service and 1 to 59 hours of college credit. Prior to 1991, police cadets in Houston were…
ERIC Educational Resources Information Center
Parisi, Joe
2012-01-01
This paper explores several research questions that identify differences between conditionally admitted students and regularly admitted students in terms of achievement results at one institution. The research provides specific variables as well as relationships including historical and comparative aggregate data from 2009 and 2010 that indicate…
ERIC Educational Resources Information Center
Eshetu, Amogne Asfaw
2015-01-01
Gender is among the determinant factors affecting students' academic achievement. This paper tried to investigate the impact of gender on academic performance of preparatory secondary school students based on 2014 EHEECE result. Ex post facto research design was used. To that end, data were collected from 3243 students from eight purposively…
ERIC Educational Resources Information Center
Mc Beth, Maureen
2010-01-01
This study provides important insights into the relationship between the epistemological beliefs of community college students, the selection of learning strategies, and academic achievement. This study employed a quantitative survey design. Data were collected by surveying students at a community college during the spring semester of 2010. The…
The Little District that Could: Literacy Reform Leads to Higher Achievement in California District
ERIC Educational Resources Information Center
Kelly, Patricia R.; Budicin-Senters, Antoinette; King, L. McLean
2005-01-01
This article describes educational reform developed over a 10-year period in California's Lemon Grove School District, which resulted in a steady and remarkable upward shift in achievement for the students of this multicultural district just outside San Diego. Six elements of literacy reform emerged as the most significant factors affecting…
ERIC Educational Resources Information Center
Myers, Carrie B.; Brown, Doreen E.; Pavel, D. Michael
2010-01-01
The purpose of this study was to assess how a comprehensive precollege intervention and developmental program among low-income high school students contributed to college enrollment outcomes measured in 2006. Our focus was on the Fifth Cohort of the Washington State Achievers (WSA) Program, which provides financial, academic, and college…
Success in Higher Education: The Challenge to Achieve Academic Standing and Social Position
ERIC Educational Resources Information Center
Life, James
2015-01-01
When students look at their classmates in the classroom, consciously or unconsciously, they see competitors both for academic recognition and social success. How do they fit in relation to others and how do they succeed in achieving both? Traditional views on the drive to succeed and the fear of failure are well known as motivators for achieving…
ERIC Educational Resources Information Center
Association of Universities and Colleges of Canada, 2004
2004-01-01
As Canada's opportunities to claim international leadership are assessed, the best prospects lie in a combination of our impressive higher education and research commitments, civic and institutional values, and quality of life. This paper concludes that as an exporting country, the benefits will come in economic growth. As citizens of the world,…
ERIC Educational Resources Information Center
Ho, Hsuan-Fu; Lin, Ming-Huang; Yang, Cheng-Cheng
2015-01-01
International knowledge and skills are essential for success in today's highly competitive global marketplace. As one of the key providers of such knowledge and skills, universities have become a key focus of the internationalization strategies of governments throughout the world. While the internationalization of higher education clearly has…
Identifying Factors That Affect Higher Educational Achievements of Jamaican Seventh-Day Adventists
ERIC Educational Resources Information Center
Campbell, Samuel P.
2011-01-01
This mixed-method explanatory research examined factors that influenced Jamaican Seventh-day Adventist (SDA) members to pursue higher education. It sought to investigate whether the source of the motivation is tied to the Church's general philosophy on education or to its overall programs as experienced by the membership at large. The question of…
Personality Factors and Achievement Motivation of Women in Higher Education Administration.
ERIC Educational Resources Information Center
Lester, Patricia; Chu, Lily
Female and male higher education administrators in Texas and New Mexico were compared in terms of their sex role orientation, motivational factors, and administrative styles. In addition to individual interviews of the 68 administrators, a questionnaire was developed that included items from the Bem Sex Role Inventory, Work and Family Orientation…
Maryland Higher Education Commission Data Book 2008. Creating a State of Achievement
ERIC Educational Resources Information Center
Maryland Higher Education Commission, 2008
2008-01-01
This document presents statistics about the higher education in Maryland for 2008. The tables in this document are presented according to the following categories: (1) Students; (2) Retention and Graduation; (3) Degrees; (4) Faculty; (5) Revenues and Expenditures; (6) Tuition and Fees; (7) Financial Aid; (8) Private Career Schools; and (9)…
Maryland Higher Education Commission Data Book 2010. Creating a State of Achievement
ERIC Educational Resources Information Center
Maryland Higher Education Commission, 2010
2010-01-01
This document presents statistics about the higher education in Maryland for 2010. The tables in this document are presented according to the following categories: (1) Students; (2) Retention and Graduation; (3) Degrees; (4) Faculty; (5) Revenues and Expenditures; (6) Tuition and Fees; (7) Financial Aid; (8) Private Career Schools; and (9)…
Maryland Higher Education Commission Data Book 2009. Creating a State of Achievement
ERIC Educational Resources Information Center
Maryland Higher Education Commission, 2009
2009-01-01
This document presents statistics about the higher education in Maryland for 2009. The tables in this document are presented according to the following categories: (1) Students; (2) Retention and Graduation; (3) Degrees; (4) Faculty; (5) Revenues and Expenditures; (6) Tuition and Fees; (7) Financial Aid; (8) Private Career Schools; and (9)…
Maryland Higher Education Commission Data Book 2011. Creating a State of Achievement
ERIC Educational Resources Information Center
Maryland Higher Education Commission, 2011
2011-01-01
This document presents statistics about higher education in Maryland for 2011. The tables in this document are presented according to the following categories: (1) Students; (2) Retention and Graduation; (3) Degrees; (4) Faculty; (5) Revenues and Expenditures; (6) Tuition and Fees; (7) Financial Aid; (8) Private Career Schools; and (9) Distance…
Linking Emotional Intelligence to Achieve Technology Enhanced Learning in Higher Education
ERIC Educational Resources Information Center
Kruger, Janette; Blignaut, A. Seugnet
2013-01-01
Higher education institutions (HEIs) increasingly use technology-enhanced learning (TEL) environments (e.g. blended learning and e-learning) to improve student throughput and retention rates. As the demand for TEL courses increases, expectations rise for faculty to meet the challenge of using TEL effectively. The promises that TEL holds have not…
Dodge, Cristina T; Tamm, Eric P; Cody, Dianna D; Liu, Xinming; Jensen, Corey T; Wei, Wei; Kundra, Vikas; Rong, X John
2016-01-01
The purpose of this study was to characterize image quality and dose performance with GE CT iterative reconstruction techniques, adaptive statistical iterative reconstruction (ASiR), and model-based iterative reconstruction (MBIR), over a range of typical to low-dose intervals using the Catphan 600 and the anthropomorphic Kyoto Kagaku abdomen phantoms. The scope of the project was to quantitatively describe the advantages and limitations of these approaches. The Catphan 600 phantom, supplemented with a fat-equivalent oval ring, was scanned using a GE Discovery HD750 scanner at 120 kVp, 0.8 s rotation time, and pitch factors of 0.516, 0.984, and 1.375. The mA was selected for each pitch factor to achieve CTDIvol values of 24, 18, 12, 6, 3, 2, and 1 mGy. Images were reconstructed at 2.5 mm thickness with filtered back-projection (FBP); 20%, 40%, and 70% ASiR; and MBIR. The potential for dose reduction and low-contrast detectability were evaluated from noise and contrast-to-noise ratio (CNR) measurements in the CTP 404 module of the Catphan. Hounsfield units (HUs) of several materials were evaluated from the cylinder inserts in the CTP 404 module, and the modulation transfer function (MTF) was calculated from the air insert. The results were confirmed in the anthropomorphic Kyoto Kagaku abdomen phantom at 6, 3, 2, and 1 mGy. MBIR reduced noise levels five-fold and increased CNR by a factor of five compared to FBP below 6 mGy CTDIvol, resulting in a substantial improvement in image quality. Compared to ASiR and FBP, HU in images reconstructed with MBIR were consistently lower, and this discrepancy was reversed by higher pitch factors in some materials. MBIR improved the conspicuity of the high-contrast spatial resolution bar pattern, and MTF quantification confirmed the superior spatial resolution performance of MBIR versus FBP and ASiR at higher dose levels. While ASiR and FBP were relatively insensitive to changes in dose and pitch, the spatial resolution for MBIR
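The CNR measurement used throughout the abstract above can be sketched with one common definition: the difference of the mean values of a target region and a background region, divided by the background noise. This is a generic sketch with synthetic pixel data, not the specific ROI protocol of the study.

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio: one common definition, using the
    background standard deviation as the noise estimate."""
    roi = np.asarray(roi, dtype=float)
    bg = np.asarray(background, dtype=float)
    return abs(roi.mean() - bg.mean()) / bg.std(ddof=1)

# Synthetic ROIs: same contrast (20 HU), different noise levels,
# mimicking a noisy low-dose scan versus a cleaner higher-dose scan.
rng = np.random.default_rng(3)
low_dose = cnr(rng.normal(60, 20, 400), rng.normal(40, 20, 400))
high_dose = cnr(rng.normal(60, 5, 400), rng.normal(40, 5, 400))
```

Because the contrast is fixed, the five-fold noise reduction reported for MBIR translates directly into roughly a five-fold CNR gain under this definition.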
ERIC Educational Resources Information Center
Klapproth, Florian
2015-01-01
Two objectives guided this research. First, this study examined how well teachers' tracking decisions contribute to the homogenization of their students' achievements. Second, the study explored whether teachers' tracking decisions would be outperformed in homogenizing the students' achievements by statistical models of tracking decisions. These…
ERIC Educational Resources Information Center
Sarwar, Muhammad; Ashrafi, Ghulam Muhammad
2014-01-01
The purpose of this study was to analyze Students' Commitment, Engagement and Locus of Control as predictors of Academic Achievement at Higher Education Level. We used analytical model and conclusive research approach to conduct study and survey method for data collection. We selected 369 students using multistage sampling technique from…
ERIC Educational Resources Information Center
Gulacar, Ozcan; Eilks, Ingo; Bowman, Charles R.
2014-01-01
This paper reports a comparison of a group of higher-and lower-achieving undergraduate chemistry students, 17 in total, as separated on their ability in stoichiometry. This exploratory study of 17 students investigated parallels and differences in the students' general and domain-specific cognitive abilities. Performance, strategies, and…
ERIC Educational Resources Information Center
Dearing, Eric; McCartney, Kathleen; Taylor, Beck A.
2009-01-01
Higher quality child care during infancy and early childhood (6-54 months of age) was examined as a moderator of associations between family economic status and children's (N = 1,364) math and reading achievement in middle childhood (4.5-11 years of age). Low income was less strongly predictive of underachievement for children who had been in…
ERIC Educational Resources Information Center
Keeley, Thomas Allen
2010-01-01
The purpose of this study was to determine whether the areas of teaching methods, teacher-student relationships, school structure, school-community partnerships or school leadership were significantly embedded in practice and acted as a change agent among school systems that achieve higher than expected results on their state standardized testing…
ERIC Educational Resources Information Center
Schlechter, Melissa; Milevsky, Avidan
2010-01-01
The purpose of the current study is to determine the interconnection between parental level of education, psychological well-being, academic achievement and reasons for pursuing higher education in adolescents. Participants included 439 college freshmen from a mid-size state university in the northeastern USA. A survey, including indices of…
Jet algorithms in electron-positron annihilation: perturbative higher order predictions
NASA Astrophysics Data System (ADS)
Weinzierl, Stefan
2011-02-01
This article gives results on several jet algorithms in electron-positron annihilation. Considered are the exclusive sequential recombination algorithms Durham, Geneva, Jade-E0 and Cambridge, which are typically used in electron-positron annihilation. In addition, inclusive jet algorithms are also studied: results are provided for the inclusive sequential recombination algorithms Durham, Aachen and anti-kt, as well as the infrared-safe cone algorithm SISCone. The results are obtained in perturbative QCD and are N3LO for the two-jet rates, NNLO for the three-jet rates, NLO for the four-jet rates and LO for the five-jet rates.
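The exclusive sequential recombination step common to these algorithms can be sketched for the Durham (kt) case: repeatedly merge the particle pair with the smallest distance measure y_ij = 2 min(Ei, Ej)^2 (1 - cos θij) / E_vis^2 until every remaining pair exceeds the resolution cut. This is an illustrative sketch with E-scheme recombination and a toy three-particle event of our own; it is not the code behind the paper's perturbative predictions.

```python
import numpy as np

def durham_cluster(event, y_cut):
    """Exclusive Durham (kt) clustering for e+e- events.

    event: list of 4-momenta (E, px, py, pz). The pair with smallest
    y_ij = 2 min(Ei, Ej)^2 (1 - cos theta_ij) / E_vis^2 is merged by
    4-momentum addition (E-scheme) until all y_ij exceed y_cut.
    Returns the resulting number of jets.
    """
    p = [np.array(v, dtype=float) for v in event]
    e_vis2 = sum(v[0] for v in p) ** 2
    while len(p) > 1:
        best = None
        for i in range(len(p)):
            for j in range(i + 1, len(p)):
                cos = (p[i][1:] @ p[j][1:]) / (
                    np.linalg.norm(p[i][1:]) * np.linalg.norm(p[j][1:]))
                y = 2.0 * min(p[i][0], p[j][0]) ** 2 * (1.0 - cos) / e_vis2
                if best is None or y < best[0]:
                    best = (y, i, j)
        y, i, j = best
        if y > y_cut:
            break               # all pairs resolved: stop clustering
        p[i] = p[i] + p[j]      # E-scheme recombination
        del p[j]
    return len(p)

# Back-to-back pair plus one soft, nearly collinear particle:
# the soft particle is absorbed, leaving 2 jets at y_cut = 0.01.
event = [(50, 0, 0, 50), (50, 0, 0, -50), (1, 0.1, 0, 1)]
n_jets = durham_cluster(event, 0.01)
```

The min(Ei, Ej)^2 weighting is what makes the Durham measure infrared-safe: a soft emission always produces a small y_ij and is clustered first.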
NASA Astrophysics Data System (ADS)
Robila, Stefan A.
2005-03-01
Hyperspectral data is modeled as an unknown mixture of original features (such as the materials present in the scene). The goal is to find the unmixing matrix and to perform the inversion in order to recover them. Unlike first- and second-order techniques (such as PCA), higher-order statistics (HOS) methods assume the data has non-Gaussian behavior and are able to represent much subtler differences among the original features. The HOS algorithms transform the data such that the resulting components are uncorrelated and their non-Gaussianity is maximized (the resulting components are statistically independent). Subpixel targets in a natural background can be seen as anomalies of the image scene. They exhibit strongly non-Gaussian behavior and correspond to independent components, leading to their detection when HOS techniques are employed. The methods presented in this paper start by preprocessing the hyperspectral image through centering and sphering. The resulting bands are transformed using gradient-based optimization of the HOS measure. Next, the data are reduced through a selection of the components associated with small targets using the changes of the slope in the scree graph of the non-Gaussianity values. The targets are filtered using histogram-based analysis. The end result is a map of the pixels associated with small targets.
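The centering and sphering preprocessing step described above can be sketched as follows: after whitening, the bands are zero-mean, unit-variance and mutually uncorrelated, which is the standard starting point before maximizing a higher-order non-Gaussianity measure such as kurtosis. The ZCA-style whitening and the Laplacian test sources are our own illustrative choices, not necessarily those of the paper.

```python
import numpy as np

def sphere(X):
    """Center and whiten: rows are bands, columns are pixels."""
    Xc = X - X.mean(axis=1, keepdims=True)            # centering
    cov = Xc @ Xc.T / Xc.shape[1]
    vals, vecs = np.linalg.eigh(cov)
    W = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T  # ZCA whitening matrix
    return W @ Xc

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))            # unknown mixing matrix
S = rng.laplace(size=(4, 5000))            # non-Gaussian "material" sources
Z = sphere(A @ S)                          # whitened mixture
cov_after = Z @ Z.T / Z.shape[1]           # should be the identity
kurt = np.mean(Z ** 4, axis=1) - 3.0       # excess kurtosis per component
```

After this step, only a rotation remains to be found, and the gradient-based optimization of the HOS measure searches for the rotation that maximizes quantities like `kurt`.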
Young, Michael N.; Hollenbeck, Ryan D.; Pollock, Jeremy S.; Giuseffi, Jennifer L.; Wang, Li; Harrell, Frank E.; McPherson, John A.
2015-01-01
Introduction: To determine if higher achieved mean arterial blood pressure (MAP) during treatment with therapeutic hypothermia (TH) is associated with neurologically intact survival following cardiac arrest. Methods: Retrospective analysis of a prospectively collected cohort of 188 consecutive patients treated with TH in the cardiovascular intensive care unit of an academic tertiary care hospital. Results: Neurologically intact survival was observed in 73/188 (38.8%) patients at hospital discharge and in 48/162 (29.6%) patients at a median follow up interval of 3 months. Patients in shock at the time of admission had lower baseline MAP at the initiation of TH (81 versus 87 mmHg; p=0.002), but had similar achieved MAP during TH (80.3 versus 83.7 mmHg; p=0.11). Shock on admission was associated with poor survival (18% versus 52%; p<0.001). Vasopressor use among all patients was common (84.6%) and was not associated with increased mortality. A multivariable analysis including age, initial rhythm, time to return of spontaneous circulation, baseline MAP and achieved MAP did not demonstrate a relationship between MAP achieved during TH and poor neurologic outcome at hospital discharge (OR 1.28, 95% CI 0.40–4.06; p=0.87) or at outpatient follow up (OR 1.09, 95% CI 0.32–3.75; p=0.976). Conclusion: We did not observe a relationship between higher achieved MAP during TH and neurologically intact survival. However, shock at the time of admission was clearly associated with poor outcomes in our study population. These data do not support the use of vasopressors to artificially increase MAP in the absence of shock. There is a need for prospective, randomized trials to further define the optimum blood pressure target during treatment with TH. PMID:25541429
NASA Astrophysics Data System (ADS)
Putro, Budi Laksono; Surendro, Kridanto; Herbert
2016-02-01
Data is a vital asset for a business enterprise in achieving organizational goals. Data and information affect the decision-making process across the various activities of an organization. Data problems include validity, quality, duplication, control over data, and the difficulty of data availability. Data governance is the way a company or institution manages its data assets; it covers the rules, policies, procedures, roles and responsibilities, and performance indicators that direct the overall management of those assets. Many studies on data or information governance recommend attending to cultural factors in data governance research. Organizational culture and leadership have a very close relationship, expressed in two complementary ideas: culture is created by leaders, and leaders are created by culture. On this basis, this study addresses the theme "Leadership and Culture of Data Governance for the Achievement of Higher Education Goals (Case Study: Indonesia University of Education)". A culture and leadership model for data governance in Indonesian higher education was developed by comparing several models of data governance, organizational culture, and organizational leadership from previous studies, weighing the advantages and disadvantages of each model against the existing organizational business. The resulting data governance model shows that the current organizational culture at FPMIPA, Indonesia University of Education, is a market culture, while the desired culture is a clan culture. Current organizational leadership shows an Individualism Index (IDV) of 83.72% and a situational leadership style in the "selling" position.
NASA Astrophysics Data System (ADS)
Erlick, Katherine
"The stereotype of engineers is that they are not people oriented; the stereotype implies that engineers would not work well in teams---that their task emphasis is a solo venture and does not encourage social aspects of collaboration" (Miner & Beyerlein, 1999, p. 16). The problem is determining the best method of providing a motivating environment where design engineers may contribute within a team in order to achieve higher performance in the organization. Theoretically, self-directed work teams perform at higher levels. But, allowing a design engineer to contribute to the team while still maintaining his or her anonymity is the key to success. Therefore, a motivating environment must be established to encourage greater self-actualization in design engineers. The purpose of this study is to determine the favorable motivational environment for design engineers and describe the comparison between two aerospace design-engineering teams: one self-directed and the other manager directed. Following the comparison, this study identified whether self-direction or manager-direction provides the favorable motivational environment for operating as a team in pursuit of achieving higher performance. The methodology used in this research was the case study focusing on the team's levels of job satisfaction and potential for higher performance. The collection of data came from three sources, (a) surveys, (b) researcher observer journal and (c) collection of artifacts. The surveys provided information regarding personal behavior characteristics, potentiality for higher performance and motivational attributes. The researcher journal provided information regarding team dynamics, individual interaction, conflict and conflict resolution. The milestone for performance was based on the collection of artifacts from the two teams. The findings from this study illustrated that whether the team was manager-directed or self-directed does not appear to influence the needs and wants of the
ERIC Educational Resources Information Center
Stringer, Neil
2008-01-01
Advocates of using a US-style SAT for university selection claim that it is fairer to applicants from disadvantaged backgrounds than achievement tests because it assesses potential, not achievement, and that it allows finer discrimination between top applicants than GCEs. The pros and cons of aptitude tests in principle are discussed, focusing on…
ERIC Educational Resources Information Center
Siahi, Evans Atsiaya; Maiyo, Julius K.
2015-01-01
Studies on the correlates of academic achievement have paved the way for control and manipulation of related variables for quality results in schools. In spite of the fact that schools impart uniform classroom instruction to all students, a wide range of difference is observed in their academic achievement. The study sought to determine the…
ERIC Educational Resources Information Center
Latha, Prema
2014-01-01
Disturbing sounds are often referred to as noise and, if extreme enough in degree, intensity, or frequency, as noise pollution. Achievement, in the educational sense, refers to a change in study behavior in relation to students' noise sensitivity and learning, achieved through changed responses to certain types of stimuli like…
ERIC Educational Resources Information Center
Wright, Bobby
This paper reviews the history of higher education for Native Americans and proposes change strategies. Assimilation was the primary goal of higher education from early colonial times to the 20th century. Tribal response ranged from resistance to support of higher education. When the Federal Government began to dominate Native education in the…
ERIC Educational Resources Information Center
Ehrlich, Jenifer, Ed.
2006-01-01
"Forum Focus" was a semi-annual magazine of the Business-Higher Education Forum (BHEF) that featured articles on the role of business and higher education on significant issues affecting the P-16 education system. The magazine typically focused on themes featured at the most recently held semi-annual Forum meeting at the time of publication.…
Tsanas, Athanasios; Little, Max A.; McSharry, Patrick E.; Ramig, Lorraine O.
2011-01-01
The standard reference clinical score quantifying average Parkinson's disease (PD) symptom severity is the Unified Parkinson's Disease Rating Scale (UPDRS). At present, UPDRS is determined by the subjective clinical evaluation of the patient's ability to adequately cope with a range of tasks. In this study, we extend recent findings that UPDRS can be objectively assessed to clinically useful accuracy using simple, self-administered speech tests, without requiring the patient's physical presence in the clinic. We apply a wide range of known speech signal processing algorithms to a large database (approx. 6000 recordings from 42 PD patients, recruited to a six-month, multi-centre trial) and propose a number of novel, nonlinear signal processing algorithms which reveal pathological characteristics in PD more accurately than existing approaches. Robust feature selection algorithms select the optimal subset of these algorithms, which is fed into non-parametric regression and classification algorithms, mapping the signal processing algorithm outputs to UPDRS. We demonstrate rapid, accurate replication of the UPDRS assessment with clinically useful accuracy (about 2 UPDRS points difference from the clinicians' estimates, p < 0.001). This study supports the viability of frequent, remote, cost-effective, objective, accurate UPDRS telemonitoring based on self-administered speech tests. This technology could facilitate large-scale clinical trials into novel PD treatments. PMID:21084338
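The study's actual feature set and learners are not reproduced in this abstract; as a hedged sketch of the final mapping step, the snippet below uses a hypothetical two-feature speech representation (jitter-like and shimmer-like stand-ins) and a simple k-nearest-neighbour regressor, one common non-parametric choice, to map feature vectors to a UPDRS estimate.

```python
def knn_predict_updrs(train, query, k=3):
    """Non-parametric regression: average the UPDRS of the k nearest
    training recordings in (hypothetical) speech-feature space."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda rec: dist2(rec[0], query))[:k]
    return sum(updrs for _, updrs in nearest) / k

# Toy training data: (jitter-like, shimmer-like) features -> clinician UPDRS.
train = [((0.10, 0.20), 10.0), ((0.15, 0.25), 12.0), ((0.90, 0.80), 40.0),
         ((0.85, 0.75), 38.0), ((0.12, 0.22), 11.0), ((0.95, 0.85), 42.0)]
estimate = knn_predict_updrs(train, (0.11, 0.21))   # lands in the low-severity cluster
```

In the study itself, robust feature selection precedes this step, so only the most informative signal-processing outputs feed the regressor.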
ERIC Educational Resources Information Center
Wurst, Christian; Smarkola, Claudia; Gaffney, Mary Anne
2008-01-01
Three years of graduating business honors cohorts in a large urban university were sampled to determine whether the introduction of ubiquitous laptop computers into the honors program contributed to student achievement, student satisfaction and constructivist teaching activities. The first year cohort consisted of honors students who did not have…
ERIC Educational Resources Information Center
Lorch, Robert F., Jr.; Lorch, Elizabeth P.; Freer, Benjamin Dunham; Dunlap, Emily E.; Hodell, Emily C.; Calderhead, William J.
2014-01-01
Students (n = 1,069) from 60 4th-grade classrooms were taught the control of variables strategy (CVS) for designing experiments. Half of the classrooms were in schools that performed well on a state-mandated test of science achievement, and half were in schools that performed relatively poorly. Three teaching interventions were compared: an…
NASA Astrophysics Data System (ADS)
Jun, Xie Cheng; Su, Yan; Wei, Zhang
2006-08-01
In this paper, a modified algorithm is introduced to improve the Rice coding algorithm, and research on image compression with the CDF(2,2) wavelet lifting scheme is presented. Our experiments show that its lossless image compression performance is much better than Huffman, Zip, lossless JPEG, and RAR, and slightly better than (or equal to) the well-known SPIHT; the lossless compression rate is improved by about 60.4%, 45%, 26.2%, 16.7%, and 0.4% on average, respectively. The encoder is about 11.8 times faster than SPIHT's, and its time efficiency is improved by 162%; the decoder is about 12.3 times faster than SPIHT's, and its time efficiency is raised by about 148%. Rather than requiring the largest number of wavelet transform levels, this algorithm achieves high coding efficiency whenever the number of wavelet transform levels is greater than 3. For source models with distributions similar to the Laplacian, it improves coding efficiency and realizes progressive transmission in both coding and decoding.
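The CDF(2,2) lifting scheme itself is compact enough to sketch. The version below is an illustrative assumption, not the paper's implementation: one decomposition level on an even-length 1-D signal, floating-point arithmetic, and simple edge replication at the boundaries. Because lifting steps are undone in exact reverse order, reconstruction is perfect, which is what makes the scheme suitable for lossless coding.

```python
def cdf22_forward(x):
    """One level of the CDF(2,2) lifting wavelet (x must have even length)."""
    even, odd = x[::2], x[1::2]
    n = len(even)
    # Predict: detail = odd sample minus the average of its even neighbours.
    d = [odd[i] - 0.5 * (even[i] + even[min(i + 1, n - 1)]) for i in range(n)]
    # Update: approximation = even sample plus a quarter of neighbouring details.
    s = [even[i] + 0.25 * (d[max(i - 1, 0)] + d[i]) for i in range(n)]
    return s, d

def cdf22_inverse(s, d):
    """Exact inverse: the lifting steps are inverted in reverse order."""
    n = len(s)
    even = [s[i] - 0.25 * (d[max(i - 1, 0)] + d[i]) for i in range(n)]
    odd = [d[i] + 0.5 * (even[i] + even[min(i + 1, n - 1)]) for i in range(n)]
    x = []
    for e, o in zip(even, odd):
        x.extend([e, o])
    return x

signal = [3.0, 5.0, 4.0, 8.0, 7.0, 6.0, 2.0, 1.0]
s, d = cdf22_forward(signal)   # approximation band s, detail band d
```

A codec then entropy-codes the small, near-Laplacian detail coefficients (here with the improved Rice coder), recursing the forward transform on `s` for further levels.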
ERIC Educational Resources Information Center
James, Matthew R.
2009-01-01
Leal Filho, MacDermot, and Padgam (1996) contended that post-secondary institutions are well suited to take on leadership responsibilities for society's environmental protection. Higher education has the unique academic freedom to engage in critical thinking and bold experimentation in environmental sustainability (Cortese, 2003). Although…
ERIC Educational Resources Information Center
Houston, Don
2010-01-01
While the past two decades have seen significant expansion and harmonisation of quality assurance mechanisms in higher education, there is limited evidence of positive effects on the quality of core processes of teaching and learning. The paradox of the separation of assurance from improvement is explored. A shift in focus from surveillance to…
ERIC Educational Resources Information Center
Jackson, Norman; Ward, Rob
2004-01-01
This article addresses the challenge of developing new conceptual knowledge to help us make better sense of the way that higher education is approaching the "problem" of representing (documenting, certifying and communicating by other means) students' learning for the super-complex world described by Barnett (2000b). The current UK solution to…
Guijarro-Herraiz, Carlos; Masana-Marin, Luis; Galve, Enrique; Cordero-Fort, Alberto
2014-01-01
Reducing low density lipoprotein-cholesterol (LDL-c) is the main lipid goal of treatment for patients with very high cardiovascular risk. In these patients the therapeutic goal is to achieve a LDL-c lower than 70 mg/dL, as recommended by the guidelines for cardiovascular prevention commonly used in Spain and Europe. However, the degree of achieving these objectives in this group of patients is very low. This article describes the prevalence of the problem and the causes that motivate it. Recommendations and tools that can facilitate the design of an optimal treatment strategy for achieving the goals are also given. In addition, a new tool with a simple algorithm that can allow these very high risk patients to achieve the goals "in two-steps", i.e., with only two doctor check-ups, is presented. PMID:25048471
ERIC Educational Resources Information Center
Ramirez, Dawn Marie
2012-01-01
The purpose of this quantitative study was to examine the factors that affect women administrators in higher education at four-year public and private universities in Texas. By comparing private and public universities, the research provided an assessment of similarities and differences of the factors impacting achievement of women in higher…
Tavares, Eveline Q P; De Souza, Amanda P; Buckeridge, Marcos S
2015-07-01
Cell-wall recalcitrance to hydrolysis still represents one of the major bottlenecks for second-generation bioethanol production. This occurs despite the development of pre-treatments, the prospect of new enzymes, and the production of transgenic plants with less-recalcitrant cell walls. Recalcitrance, which is the intrinsic resistance to breakdown imposed by polymer assembly, is the result of inherent limitations in its three domains. These consist of: (i) porosity, associated with a pectin matrix impairing trafficking through the wall; (ii) the glycomic code, which refers to the fine-structural emergent complexity of cell-wall polymers that are unique to cells, tissues, and species; and (iii) cellulose crystallinity, which refers to the organization in micro- and/or macrofibrils. One way to circumvent recalcitrance could be by following cell-wall hydrolysis strategies underlying plant endogenous mechanisms that are optimized to precisely modify cell walls in planta. Thus, the cell-wall degradation that occurs during fruit ripening, abscission, storage cell-wall mobilization, and aerenchyma formation are reviewed in order to highlight how plants deal with recalcitrance and which are the routes to couple prospective enzymes and cocktail designs with cell-wall features. The manipulation of key enzyme levels in planta can help achieving biologically pre-treated walls (i.e. less recalcitrant) before plants are harvested for bioethanol production. This may be helpful in decreasing the costs associated with producing bioethanol from biomass.
NASA Astrophysics Data System (ADS)
Searson, Robert Francis
This researcher investigated the effects of tactual and kinesthetic instructional resources on the simple-recall and higher-level cognitive science achievement, and the attitudes toward science, of third-grade suburban students in a northern New Jersey school district. The Learning Style Inventory (LSI) (Dunn, Dunn, & Price, 1996) was administered to identify the learning-style perceptual preferences of all 59 third-graders who completed the three science units. Each of the three classes was presented two science units using learning-style instructional resources; one science unit was taught using traditional methods. All three science units were completed in a six-week period. Students were administered a pretest and posttest for each science unit and the Semantic Differential Scale (Pizzo, 1981) at the conclusion of each science unit. Analysis of variance (ANOVA) assessed the effects of the treatments and attitudes toward science. The statistical analysis revealed a significant difference (p < 0.0001) between students' simple-recall science achievement posttest scores when taught tactually and/or kinesthetically compared to when they were taught science traditionally. Furthermore, a contingency table analysis using Fisher's Exact Test indicated a significant difference (p = 0.00008) between the higher-level cognitive science achievement posttest scores when students were taught science tactually and/or kinesthetically compared to when they were taught traditionally. The findings of this study supported the view that when tactual and/or kinesthetic methods were employed, higher achievement gains were realized for both simple-recall and higher-level cognitive science achievement. Further recommendations called for a reexamination of the science instructional methods employed in our elementary classrooms.
ERIC Educational Resources Information Center
Baran, Bahar; Kiliç, Eylem
2015-01-01
The purpose of this study is to analyze three separate constructs (demographics, study habits, and technology familiarity) that can be used to identify university students' characteristics and the relationship between each of these constructs with student achievement. A survey method was used for the current study, and the participants included…
Benson, Nicholas F; Kranzler, John H; Floyd, Randy G
2016-10-01
Prior research examining relations between cognitive ability and academic achievement has been based on different theoretical models, has employed both latent and observed variables, and has used a variety of analytic methods. Not surprisingly, results have been inconsistent across studies. The aims of this study were to (a) examine how relations between psychometric g, Cattell-Horn-Carroll (CHC) broad abilities, and academic achievement differ across higher-order and bifactor models; (b) examine how well various types of observed scores corresponded with latent variables; and (c) compare two types of observed scores (i.e., refined and non-refined factor scores) as predictors of academic achievement. Results suggest that cognitive-achievement relations vary across theoretical models and that both types of factor scores tend to correspond well with the models on which they are based. However, orthogonal refined factor scores (derived from a bifactor model) have the advantage of controlling for multicollinearity arising from the measurement of psychometric g across all measures of cognitive abilities. Results indicate that the refined factor scores provide more precise representations of their targeted constructs than non-refined factor scores and maintain close correspondence with the cognitive-achievement relations observed for latent variables. Thus, we argue that orthogonal refined factor scores provide more accurate representations of the relations between CHC broad abilities and achievement outcomes than non-refined scores do. Further, the use of refined factor scores addresses calls for the application of scores based on latent variable models. PMID:27586067
ERIC Educational Resources Information Center
What Works Clearinghouse, 2014
2014-01-01
This study of 952 fifth and sixth graders in Washington, DC, and Alexandria, Virginia, found that students who were offered the "Higher Achievement" program had higher test scores in mathematical problem solving and were more likely to be admitted to and attend private competitive high schools. "Higher Achievement" is a…
El-Qulity, Said Ali; Mohamed, Ali Wagdy
2016-01-01
This paper proposes a nonlinear integer goal programming model (NIGPM) for solving the general problem of admission capacity planning in a country as a whole. The work aims to satisfy most of the key objectives of a country related to the enrollment problem for higher education. The general outlines of the system are developed, along with the solution methodology for application over the time horizon of a given plan. Up-to-date data for Saudi Arabia are used as a case study, and a novel evolutionary algorithm based on a modified differential evolution (DE) algorithm is used to handle the complexity of the NIGPM generated for different goal priorities. The experimental results presented in this paper show the approach's effectiveness in solving the admission capacity problem for higher education in terms of final solution quality and robustness. PMID:26819583
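The paper's NIGPM and its DE modifications are not specified in this abstract. As a hedged sketch of the general idea, the snippet below applies a plain DE/rand/1/bin loop, with rounding to keep variables integer, to a toy two-goal enrollment problem; the capacity and quota targets, weights, and bounds are all invented for illustration.

```python
import random

def goal_objective(x, goals=(1000, 400), weights=(1.0, 2.0)):
    """Weighted sum of goal deviations for a toy two-variable enrollment plan:
    x[0] + x[1] should hit total capacity, x[1] should hit the science quota."""
    total_dev = abs(x[0] + x[1] - goals[0])
    quota_dev = abs(x[1] - goals[1])
    return weights[0] * total_dev + weights[1] * quota_dev

def integer_de(f, bounds, pop_size=20, gens=300, F=0.7, CR=0.9, seed=1):
    """Minimal DE/rand/1/bin; trial genes are rounded and clipped to stay
    integer and within bounds, with greedy one-to-one selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.randint(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = [pop[i][k] if rng.random() > CR else
                     min(max(round(a[k] + F * (b[k] - c[k])), bounds[k][0]),
                         bounds[k][1])
                     for k in range(dim)]
            if f(trial) <= f(pop[i]):    # keep the trial only if no worse
                pop[i] = trial
    return min(pop, key=f)

best = integer_de(goal_objective, bounds=[(0, 1500), (0, 800)])
```

A real NIGPM would carry many more decision variables, nonlinear goal terms, and priority levels, but the mutate-crossover-select loop is the same.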
Ekinci, Yunus Levent
2016-01-01
This paper presents an easy-to-use open source computer algorithm (code) for estimating the depths of isolated single thin dike-like source bodies by using numerical second-, third-, and fourth-order horizontal derivatives computed from observed magnetic anomalies. The approach does not require a priori information and uses some filters of successive graticule spacings. The computed higher-order horizontal derivative datasets are used to solve nonlinear equations for depth determination. The solutions are independent from the magnetization and ambient field directions. The practical usability of the developed code, designed in MATLAB R2012b (MathWorks Inc.), was successfully examined using some synthetic simulations with and without noise. The algorithm was then used to estimate the depths of some ore bodies buried in different regions (USA, Sweden, and Canada). Real data tests clearly indicated that the obtained depths are in good agreement with those of previous studies and drilling information. Additionally, a state-of-the-art inversion scheme based on particle swarm optimization produced comparable results to those of the higher-order horizontal derivative analyses in both synthetic and real anomaly cases. Accordingly, the proposed code is verified to be useful in interpreting isolated single thin dike-like magnetized bodies and may be an alternative processing technique. The open source code can be easily modified and adapted to suit the benefits of other researchers.
PMID:27610303
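The higher-order horizontal derivatives at the heart of the method can be approximated numerically from a sampled anomaly profile. The sketch below is an illustrative assumption rather than the code's actual filters: repeated central finite differences at a single graticule spacing, with one-sided differences at the endpoints, checked against a profile whose second derivative is known analytically.

```python
def horizontal_derivative(profile, spacing, order=1):
    """Approximate the order-th horizontal derivative of a sampled
    anomaly profile by repeated central differences (one-sided at ends)."""
    vals = list(profile)
    for _ in range(order):
        n = len(vals)
        deriv = []
        for i in range(n):
            if i == 0:
                deriv.append((vals[1] - vals[0]) / spacing)
            elif i == n - 1:
                deriv.append((vals[-1] - vals[-2]) / spacing)
            else:
                deriv.append((vals[i + 1] - vals[i - 1]) / (2 * spacing))
        vals = deriv
    return vals

# For f(x) = x**2 sampled every 1 m, the second derivative must be ~2 away
# from the profile ends, where one-sided differences degrade accuracy.
profile = [x ** 2 for x in range(10)]
second = horizontal_derivative(profile, spacing=1.0, order=2)
```

In the depth-estimation workflow, such second-, third-, and fourth-order derivative profiles feed the nonlinear equations solved for source depth.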
ERIC Educational Resources Information Center
Waldron, Chad H.
2008-01-01
The research study examined whether a difference existed between the reading achievement scores of an experimental group and a control group in standardized reading achievement. This difference measured the effect of systematic oral reading fluency instruction with repeated readings. Data from the 4Sight Pennsylvania Benchmark Reading Assessments…
Chuang, Li-Yeh; Yang, Cheng-Hong
2013-01-01
This study computationally determines the contribution of clinicopathologic factors correlated with 5-year survival in oral squamous cell carcinoma (OSCC) patients primarily treated by surgical operation (OP) followed by other treatments. From 2004 to 2010, the program enrolled 493 OSCC patients at the Kaohsiung Medical Hospital University. The clinicopathologic records were retrospectively reviewed and compared for survival analysis. The Apriori algorithm was applied to mine the association rules between these factors and improved survival. Univariate analysis of demographic data showed that grade/differentiation, clinical tumor size, pathology tumor size, and OP grouping were associated with survival longer than 36 months. Using the Apriori algorithm, multivariate correlation analysis identified the factors that coexistently provide good survival rates with higher lift values, such as grade/differentiation = 2, clinical stage group = early, primary site = tongue, and group = OP. Without the OP, the lift values are lower. In conclusion, this hospital-based analysis suggests that early OP and other treatments starting from OP are the key to improving the survival of OSCC patients, especially for early stage tongue cancer with moderate differentiation, having a better survival (>36 months) with varied OP approaches. PMID:23984353
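The frequent-itemset stage of the Apriori algorithm is easy to sketch. The snippet below is a minimal level-wise implementation on invented toy patient records (attribute tags per patient, loosely echoing the study's factors); the association-rule and lift computation that follows in the study is omitted here.

```python
from itertools import combinations

def apriori_frequent(transactions, min_support):
    """Return frequent itemsets (frozensets) with support >= min_support,
    growing candidates level by level as in the Apriori algorithm."""
    n = len(transactions)
    frequent = {}
    level = {frozenset([item]) for t in transactions for item in t}
    while level:
        counts = {c: sum(1 for t in transactions if c <= t) for c in level}
        kept = {c: cnt / n for c, cnt in counts.items() if cnt / n >= min_support}
        frequent.update(kept)
        # Join step: candidates one item larger, built from surviving sets.
        keys = list(kept)
        level = {a | b for a, b in combinations(keys, 2)
                 if len(a | b) == len(a) + 1}
    return frequent

# Toy patient records: attribute tags per patient (invented for illustration).
tx = [frozenset(t) for t in [
    {"grade=2", "stage=early", "site=tongue", "op"},
    {"grade=2", "stage=early", "op"},
    {"grade=3", "stage=late"},
    {"grade=2", "stage=early", "site=tongue", "op"},
]]
freq = apriori_frequent(tx, min_support=0.5)
```

From itemsets like {grade=2, stage=early, op}, rules such as "early stage and moderate differentiation with OP" can then be scored by confidence and lift against survival labels.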
ERIC Educational Resources Information Center
De Los Santos, Gilberto; Asgary, Nader; Nazemzadeh, Asghar; DeShields, Jr., Oscar W.
2005-01-01
Some projections about Hispanic individuals point to a rosy picture regarding gains in higher educational enrollment. Other studies lament that these gains are, at best, minimal. Although the so-called higher education pie is undoubtedly expanding, this article concludes that Hispanic adults are losing, rather than gaining, educational attainment…
ERIC Educational Resources Information Center
Texas Education Agency, Austin.
Twenty-three papers on the use of higher order thinking approaches to improve basic skills education are presented. The keynote article is (1) "A Case for Higher Order Thinking" (G. Garcia, Jr.). Under the heading "English Language Arts" are: (2) "Developing an Elementary Writing Program" (K. Contreras); (3) "Revision in the Writing Process" (L.…
ERIC Educational Resources Information Center
Chudowsky, Naomi; Chudowsky, Victor; Kober, Nancy
2009-01-01
This report is the first in a series of reports describing results from the Center on Education Policy's (CEP's) third annual analysis of state testing data. The report provides an update on student performance at the proficient level of achievement, and for the first time, includes data about student performance at the advanced and basic levels.…
ERIC Educational Resources Information Center
Texas Education Agency, Austin.
This volume presents 22 papers that discuss thinking in the context of subjects taught in general education, special and vocational education, educational technology, and special programs. The keynote article is: (1) "A Case for Higher Order Thinking" (G. Garcia Jr.). Under the heading "Educational Technology" are: (2) "Designing a Successful…
ERIC Educational Resources Information Center
Kennedy, Gary J.
2013-01-01
This essay proposes that much of what constitutes the quality of an institution of higher education is the quality of the students attending the institution. This quality, however, is conceptualized to extend beyond that of academic ability. Specifically, three propositions are considered. First, it is proposed that a core construct of student…
ERIC Educational Resources Information Center
Briddell, Andrew
2013-01-01
This study of 1,974 fifth grade students investigated potential relationships between writing process-based instruction practices and higher-order thinking measured by a standardized literacy assessment. Writing process is defined as a highly complex, socio-cognitive process that includes: planning, text production, review, metacognition, writing…
ERIC Educational Resources Information Center
Matthews, Dewayne
2012-01-01
In 2009, Lumina Foundation officially adopted its Big Goal that 60 percent of Americans obtain a high-quality postsecondary degree or credential by 2025. That same year, Lumina began reporting on progress toward the Big Goal in a series of reports titled "A Stronger Nation through Higher Education". The core of the reports is Census data on the…
Algorithms and Algorithmic Languages.
ERIC Educational Resources Information Center
Veselov, V. M.; Koprov, V. M.
This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntax and semantics of algorithmic languages. The terms "letter," "word," and "alphabet" are defined and described. The concept of the algorithm is defined, and the relation between the algorithm and…
Lin, Frank Yeong-Sung; Hsiao, Chiu-Han; Lin, Leo Shih-Chang; Wen, Yean-Fu
2013-01-01
Recent advances in wireless sensor network (WSN) applications such as the Internet of Things (IoT) have attracted a lot of attention. Sensor nodes have to monitor and cooperatively pass their data, such as temperature, sound, and pressure, through the network under constrained physical or environmental conditions. Quality of Service (QoS) is very sensitive to network delays. When resources are constrained and the number of receivers increases rapidly, how the sensor network can provide good QoS (measured as end-to-end delay) becomes a critical problem. In this paper, a solution to the wireless sensor network multicasting problem is proposed in which a mathematical model that provides services to accommodate delay fairness for each subscriber is constructed. Granting equal consideration to both network link capacity assignment and routing strategies for each multicast group guarantees intra-group and inter-group fairness of end-to-end delay. Minimizing delay while achieving fairness is ultimately accomplished through the Lagrangean relaxation method and the subgradient optimization technique. Test results indicate that the new system runs with greater effectiveness and efficiency. PMID:23493123
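Lagrangean relaxation with subgradient updates can be illustrated on a deliberately tiny problem; the network model itself is far larger, so the example below is an invented one-constraint stand-in, not the paper's formulation. The constraint is priced into the objective with a multiplier, the relaxed inner problem is solved in closed form, and the multiplier is adjusted along the constraint violation (the subgradient).

```python
def dual_ascent(steps=200, step0=1.0):
    """Toy Lagrangean relaxation: minimize x**2 subject to x >= 1.
    Relaxing the constraint with multiplier lam gives the inner problem
    min_x x**2 + lam * (1 - x), whose minimizer is x = lam / 2."""
    lam = 0.0
    for t in range(1, steps + 1):
        x = lam / 2.0                    # closed-form inner minimizer
        subgrad = 1.0 - x                # violation of x >= 1
        lam = max(0.0, lam + (step0 / t) * subgrad)   # projected dual ascent
    return lam / 2.0

x_star = dual_ascent()   # approaches the constrained optimum x = 1 from below
```

In the WSN model, each relaxed multicast subproblem plays the role of the inner minimization, and one multiplier per relaxed fairness constraint is updated the same way.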
Investing in Instruction for Higher Student Achievement.
ERIC Educational Resources Information Center
Bray, Judy
2003-01-01
This policy brief presents findings from Southwest Educational Development Laboratory research on resource allocation in 1,504 independent school districts in Arkansas, Louisiana, New Mexico, and Texas. Using 5 years' data from the federal Common Core of Data and the Census Bureau along with 3 years of student performance data from each state…
High Rate Pulse Processing Algorithms for Microcalorimeters
NASA Astrophysics Data System (ADS)
Tan, Hui; Breus, Dimitry; Hennig, Wolfgang; Sabourov, Konstantin; Collins, Jeffrey W.; Warburton, William K.; Bertrand Doriese, W.; Ullom, Joel N.; Bacrania, Minesh K.; Hoover, Andrew S.; Rabin, Michael W.
2009-12-01
It has been demonstrated that microcalorimeter spectrometers based on superconducting transition-edge sensors can readily achieve sub-100 eV energy resolution near 100 keV. However, the active volume of a single microcalorimeter has to be small in order to maintain good energy resolution, and pulse decay times are normally on the order of milliseconds due to slow thermal relaxation. Therefore, spectrometers are typically built with an array of microcalorimeters to increase detection efficiency and count rate. For large arrays, however, as much pulse processing as possible must be performed at the front end of the readout electronics to avoid transferring large amounts of waveform data to a host computer for post-processing. In this paper, we present digital filtering algorithms for processing microcalorimeter pulses in real time at high count rates. The goal for these algorithms, which are being implemented in readout electronics that we are also currently developing, is to achieve sufficiently good energy resolution for most applications while being (a) simple enough to be implemented in the readout electronics and (b) capable of processing overlapping pulses, and thus achieving much higher output count rates than those achieved by existing algorithms. Details of our algorithms are presented, and their performance is compared to that of the "optimal filter," currently the predominant pulse processing algorithm in the cryogenic-detector community.
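The "optimal filter" the authors benchmark against is, at heart, a least-squares fit of a known pulse template to each noisy record. A minimal sketch of that idea follows; the pulse shape, noise level, and amplitude are toy values, not the paper's electronics:

```python
import math
import random

def pulse_template(n, rise=3.0, decay=30.0):
    # Simple two-exponential pulse shape, normalized to unit peak height.
    shape = [math.exp(-t / decay) - math.exp(-t / rise) for t in range(n)]
    peak = max(shape)
    return [s / peak for s in shape]

def matched_amplitude(data, template):
    """Least-squares amplitude estimate: projection of the data onto the template."""
    num = sum(d * t for d, t in zip(data, template))
    den = sum(t * t for t in template)
    return num / den

random.seed(1)
tmpl = pulse_template(200)
true_amp = 100.0  # hypothetical pulse height, e.g. an energy in keV
noisy = [true_amp * t + random.gauss(0, 1.0) for t in tmpl]
est = matched_amplitude(noisy, tmpl)
```

The real optimal filter additionally whitens by the measured noise spectrum; this sketch assumes white noise, where the whitening step drops out.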
Does achievement motivation mediate the semantic achievement priming effect?
Engeser, Stefan; Baumann, Nicola
2014-10-01
The aim of our research was to understand the processes of the prime-to-behavior effects with semantic achievement primes. We extended existing models with a perspective from achievement motivation theory and additionally used achievement primes embedded in the running text of excerpts of school textbooks to simulate a more natural priming condition. Specifically, we proposed that achievement primes affect implicit achievement motivation and conducted pilot experiments and 3 main experiments to explore this proposition. We found no reliable positive effect of achievement primes on implicit achievement motivation. In light of these findings, we tested whether explicit (instead of implicit) achievement motivation is affected by achievement primes and found this to be the case. In the final experiment, we found support for the assumption that higher explicit achievement motivation implies that achievement priming affects the outcome expectations. The implications of the results are discussed, and we conclude that primes affect achievement behavior by heightening explicit achievement motivation and outcome expectancies. PMID:24820250
A class of parallel algorithms for computation of the manipulator inertia matrix
NASA Technical Reports Server (NTRS)
Fijany, Amir; Bejczy, Antal K.
1989-01-01
Parallel and parallel/pipeline algorithms for computation of the manipulator inertia matrix are presented. An algorithm based on the composite rigid-body spatial inertia method, which provides better features for parallelization, is used for the computation of the inertia matrix. Two parallel algorithms are developed which achieve the time lower bound in computation. Also described is the mapping of these algorithms with topological variation on a two-dimensional processor array, with nearest-neighbor connection, and with cardinality variation on a linear processor array. An efficient parallel/pipeline algorithm for the linear array was also developed, achieving significantly higher efficiency.
Saleh, Marwan D; Eswaran, C
2012-01-01
Retinal blood vessel detection and analysis play vital roles in early diagnosis and prevention of several diseases, such as hypertension, diabetes, arteriosclerosis, cardiovascular disease and stroke. This paper presents an automated algorithm for retinal blood vessel segmentation. The proposed algorithm takes advantage of powerful image processing techniques such as contrast enhancement, filtering and thresholding for more efficient segmentation. To evaluate the performance of the proposed algorithm, experiments were conducted on 40 images collected from the DRIVE database. The results show that the proposed algorithm yields an accuracy rate of 96.5%, which is higher than the results achieved by other known algorithms.
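The thresholding step of such a pipeline can be illustrated with Otsu's classic global threshold (a standard technique, not necessarily the exact method of this paper); the pixel values below are made-up stand-ins for one row of a fundus image:

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's global threshold: pick t maximizing between-class variance."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_b = sum_b = 0
    for t in range(levels):
        w_b += hist[t]          # background weight: pixels <= t
        if w_b == 0:
            continue
        w_f = total - w_b       # foreground weight
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b
        m_f = (total_sum - sum_b) / w_f
        var = w_b * w_f * (m_b - m_f) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Toy scan line: dark vessel pixels (~30) against a brighter background (~180).
pixels = [30, 32, 28, 31, 29] * 20 + [180, 182, 178, 181] * 25
t = otsu_threshold(pixels)
mask = [p <= t for p in pixels]  # True where a dark (vessel) pixel is detected
```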
A comparison of iterative algorithms and a mixed approach for in-line x-ray phase retrieval
Meng, Fanbo; Zhang, Da; Wu, Xizeng; Liu, Hong
2009-01-01
Previous studies have shown that iterative in-line x-ray phase retrieval algorithms may have higher precision than direct retrieval algorithms. This communication compares three iterative phase retrieval algorithms in terms of accuracy and efficiency using computer simulations. We found that the Fourier-transform-based algorithm (FT) converges fastest, while the Poisson-solver-based algorithm (PS) has higher precision. The traditional Gerchberg-Saxton algorithm (GS) is very slow and sometimes did not converge in our tests. A mixed FT-PS algorithm is then presented to achieve both high efficiency and high accuracy. The mixed algorithm is tested using simulated images with different noise levels and experimentally obtained images of a piece of chicken breast muscle. PMID:20161234
ERIC Educational Resources Information Center
Hartley, Tricia
2009-01-01
National learning and skills policy aims both to build economic prosperity and to achieve social justice. Participation in higher education (HE) has the potential to contribute substantially to both aims. That is why the Campaign for Learning has supported the ambition to increase the proportion of the working-age population with a Level 4…
Algorithmic synthesis using Python compiler
NASA Astrophysics Data System (ADS)
Cieszewski, Radoslaw; Romaniuk, Ryszard; Pozniak, Krzysztof; Linczuk, Maciej
2015-09-01
This paper presents a Python-to-VHDL compiler. The compiler interprets an algorithmic description of a desired behavior written in Python and translates it to VHDL. FPGAs combine many benefits of both software and ASIC implementations. Like software, the programmed circuit is flexible and can be reconfigured over the lifetime of the system. FPGAs have the potential to achieve far greater performance than software by bypassing the fetch-decode-execute cycle of traditional processors and possibly exploiting a greater level of parallelism, using many computational resources at the same time. Creating parallel programs implemented in FPGAs in pure HDL is difficult and time consuming. By using a higher level of abstraction and a high-level synthesis compiler, implementation time can be reduced. The compiler has been implemented in the Python language. This article describes the design, implementation and results of the created tools.
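A Python-to-VHDL translation of straight-line arithmetic can be sketched with the standard `ast` module; this toy handles only assignments over `+`, `-`, `*`, far short of a real high-level synthesis flow, and the generated text is illustrative VHDL-style output, not the paper's compiler:

```python
import ast

def py_to_vhdl_expr(node):
    """Translate a tiny subset of Python expressions to VHDL-style syntax."""
    if isinstance(node, ast.BinOp):
        ops = {ast.Add: "+", ast.Sub: "-", ast.Mult: "*"}
        return "(%s %s %s)" % (py_to_vhdl_expr(node.left),
                               ops[type(node.op)],
                               py_to_vhdl_expr(node.right))
    if isinstance(node, ast.Name):
        return node.id
    if isinstance(node, ast.Constant):
        return str(node.value)
    raise NotImplementedError(type(node).__name__)

def compile_assignments(source):
    """Turn `y = a + b * 2`-style lines into VHDL signal assignments."""
    lines = []
    for stmt in ast.parse(source).body:
        if isinstance(stmt, ast.Assign):
            target = stmt.targets[0].id
            lines.append("%s <= %s;" % (target, py_to_vhdl_expr(stmt.value)))
    return "\n".join(lines)

vhdl = compile_assignments("y = a + b * 2\nz = y - 1")
```

A real flow would also infer signal declarations, bit widths, and clocked processes; walking the typed AST as above is the usual starting point.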
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
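The basic concepts named above (selection, crossover, mutation) fit in a short sketch. All parameters here are illustrative, and the fitness function is the toy OneMax problem (count of 1 bits), not any application from the project:

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size=40, generations=60,
                      p_mut=0.02, seed=0):
    """Minimal generational GA: tournament selection, one-point crossover,
    bit-flip mutation. A sketch of the basic concepts, not a tuned library."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        for _ in range(pop_size):
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)       # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ 1 if rng.random() < p_mut else b for b in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = genetic_algorithm(sum, n_bits=32)  # OneMax: fitness = number of 1 bits
```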
Robust facial expression recognition algorithm based on local metric learning
NASA Astrophysics Data System (ADS)
Jiang, Bin; Jia, Kebin
2016-01-01
In facial expression recognition tasks, different facial expressions are often confused with each other. Motivated by the fact that a learned metric can significantly improve the accuracy of classification, a facial expression recognition algorithm based on local metric learning is proposed. First, k-nearest neighbors of the given testing sample are determined from the total training data. Second, chunklets are selected from the k-nearest neighbors. Finally, the optimal transformation matrix is computed by maximizing the total variance between different chunklets and minimizing the total variance of instances in the same chunklet. The proposed algorithm can find the suitable distance metric for every testing sample and improve the performance on facial expression recognition. Furthermore, the proposed algorithm can be used for vector-based and matrix-based facial expression recognition. Experimental results demonstrate that the proposed algorithm could achieve higher recognition rates and be more robust than baseline algorithms on the JAFFE, CK, and RaFD databases.
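The first two steps of the proposed method, finding the k-nearest neighbors of a test sample and grouping them into chunklets, can be sketched as follows. The 2-D features and labels are toy values; the real algorithm then learns a transformation matrix from these chunklets, which is not reproduced here:

```python
def k_nearest(train, x, k):
    """Return the k training samples (features, label) closest to x
    under squared Euclidean distance."""
    def d2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return sorted(train, key=lambda s: d2(s[0], x))[:k]

def chunklets(neighbors):
    """Group the k nearest neighbors into chunklets: subsets that share
    one class label."""
    groups = {}
    for feats, label in neighbors:
        groups.setdefault(label, []).append(feats)
    return groups

train = [((0.0, 0.0), "neutral"), ((0.2, 0.1), "neutral"),
         ((1.0, 1.0), "happy"),   ((0.9, 1.1), "happy"),
         ((5.0, 5.0), "angry")]
neigh = k_nearest(train, (1.0, 0.9), k=3)
groups = chunklets(neigh)
```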
Parallel algorithms for dynamically partitioning unstructured grids
Diniz, P.; Plimpton, S.; Hendrickson, B.; Leland, R.
1994-10-01
Grid partitioning is the method of choice for decomposing a wide variety of computational problems into naturally parallel pieces. In problems where computational load on the grid or the grid itself changes as the simulation progresses, the ability to repartition dynamically and in parallel is attractive for achieving higher performance. We describe three algorithms suitable for parallel dynamic load-balancing which attempt to partition unstructured grids so that computational load is balanced and communication is minimized. The execution time of algorithms and the quality of the partitions they generate are compared to results from serial partitioners for two large grids. The integration of the algorithms into a parallel particle simulation is also briefly discussed.
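One common way to partition a grid so that computational load is balanced is recursive coordinate bisection; this is a generic sketch of that family, not necessarily one of the three algorithms compared in the paper:

```python
def recursive_bisection(points, depth):
    """Recursive coordinate bisection: split along the longer axis at the
    weighted median so each half carries roughly equal computational load."""
    if depth == 0:
        return [points]
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    axis = 0 if max(xs) - min(xs) >= max(ys) - min(ys) else 1
    pts = sorted(points, key=lambda p: p[axis])
    half, acc, cut = sum(p[2] for p in pts) / 2.0, 0.0, len(pts)
    for i, p in enumerate(pts):
        acc += p[2]              # p[2] is the point's computational weight
        if acc >= half:
            cut = i + 1
            break
    return (recursive_bisection(pts[:cut], depth - 1) +
            recursive_bisection(pts[cut:], depth - 1))

# Toy 8x8 grid of unit-load points (x, y, weight): depth-2 bisection
# yields 4 balanced partitions.
pts = [(x, y, 1.0) for x in range(8) for y in range(8)]
parts = recursive_bisection(pts, depth=2)
```

Dynamic repartitioning amounts to rerunning such a split with updated weights as the simulation's load shifts.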
A hyperspectral images compression algorithm based on 3D bit plane transform
NASA Astrophysics Data System (ADS)
Zhang, Lei; Xiang, Libin; Zhang, Sam; Quan, Shengxue
2010-10-01
According to an analysis of hyperspectral images, a new compression algorithm based on a 3-D bit plane transform is proposed. The correlation in the spectral dimension is higher than in the spatial dimensions. The algorithm is proposed to overcome the shortcoming of the 1-D bit plane transform, which can only reduce correlation when neighboring pixels have similar values. The algorithm applies the horizontal, vertical and spectral bit plane transforms sequentially. The spectral bit plane transform can be easily realized in hardware. In addition, because the calculation and encoding of the transform matrix of each bit are independent, the algorithm can be realized with a parallel computing model, which improves calculation efficiency and greatly reduces processing time. The experimental results show that the proposed algorithm achieves improved compression performance. At a given compression ratio, the algorithm satisfies the requirements of a hyperspectral image compression system while efficiently reducing the cost of computation and memory usage.
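The core idea of the 1-D bit plane transform, that differencing similar neighbors empties out most bit planes, can be sketched as follows. This is a simplified XOR-differencing reading of the transform, not the paper's 3-D version:

```python
def bitplane_transform_1d(row):
    """1-D bit-plane transform sketch: XOR each pixel with its left neighbor,
    then split the residuals into 8 bit planes. For smooth data most planes
    become almost all-zero, which is what a bit-plane coder exploits."""
    residual = [row[0]] + [row[i] ^ row[i - 1] for i in range(1, len(row))]
    planes = []
    for b in range(8):
        planes.append([(r >> b) & 1 for r in residual])
    return planes

row = [100, 101, 101, 102, 103, 103, 104, 104]   # smooth scan line
planes = bitplane_transform_1d(row)
ones_raw = sum(bin(p).count("1") for p in row)   # 1-bits before transform
ones_res = sum(sum(pl) for pl in planes)         # 1-bits after transform
```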
Fast parallel algorithm for slicing STL based on pipeline
NASA Astrophysics Data System (ADS)
Ma, Xulong; Lin, Feng; Yao, Bo
2016-05-01
In the Additive Manufacturing field, current research on data processing mainly focuses on the slicing of large STL files or complicated CAD models. To improve efficiency and reduce slicing time, a parallel algorithm has great advantages. However, traditional algorithms can't make full use of multi-core CPU hardware resources. In this paper, a fast parallel algorithm is presented to speed up data processing. A pipeline mode is adopted to design the parallel algorithm, and the complexity of the pipeline algorithm is analyzed theoretically. To evaluate the performance of the new algorithm, the effects of thread count and layer count are investigated in a series of experiments. The experimental results show that thread count and layer count are two significant factors in the speedup ratio. The tendency of speedup versus thread count reveals a positive relationship that closely agrees with Amdahl's law, and the tendency of speedup versus layer count also keeps a positive relationship, agreeing with Gustafson's law. The new algorithm uses topological information to compute contours in parallel. Another parallel algorithm, based on data parallelism, is used in the experiments to show that the pipeline parallel mode is more efficient. A final case study demonstrates the performance of the new parallel algorithm. Compared with the serial slicing algorithm, the new pipeline parallel algorithm can make full use of multi-core CPU hardware and accelerate the slicing process; compared with the data-parallel slicing algorithm, it achieves a much higher speedup ratio and efficiency.
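A pipeline-mode slicer can be sketched as two threads linked by a queue, so that one layer's second stage overlaps the next layer's first stage. The stage functions below are hypothetical stand-ins for triangle-plane intersection and contour linking:

```python
import threading
import queue

def pipeline(layers, stage1, stage2):
    """Two-stage pipeline: stage1 (e.g. triangle-plane intersection) feeds
    stage2 (e.g. contour linking) through a bounded queue, so the two
    stages run concurrently on different layers."""
    q, results = queue.Queue(maxsize=4), []

    def producer():
        for z in layers:
            q.put((z, stage1(z)))
        q.put(None)                      # sentinel: no more layers

    def consumer():
        while True:
            item = q.get()
            if item is None:
                break
            z, segs = item
            results.append((z, stage2(segs)))

    t1 = threading.Thread(target=producer)
    t2 = threading.Thread(target=consumer)
    t1.start(); t2.start(); t1.join(); t2.join()
    return results

# Hypothetical stages: stage1 returns fake segments, stage2 counts them.
out = pipeline(range(5),
               stage1=lambda z: [("seg", z)] * (z + 1),
               stage2=lambda segs: len(segs))
```

With one producer and one consumer the FIFO queue preserves layer order; a real slicer would add more stages (sorting, linking, output) as further pipeline segments.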
Rempp, Florian; Mahler, Guenter; Michel, Mathias
2007-09-15
We introduce a scheme to perform the cooling algorithm, first presented by Boykin et al. in 2002, an arbitrary number of times on the same set of qubits. We achieve this goal by adding an additional SWAP gate and a bath contact to the algorithm. This way one qubit may repeatedly be cooled without adding additional qubits to the system. By using a product Liouville space to model the bath contact, we calculate the density matrix of the system after a given number of applications of the algorithm.
NASA Astrophysics Data System (ADS)
Ahmed, Yasser A.; Afifi, Hossam; Rubino, Gerardo
1999-05-01
This paper presents a new algorithm for stereo matching. The main idea is to decompose the original problem into independent, hierarchical, and more elementary problems that can be solved faster, without any complicated mathematics, using BBD. To achieve that, we use a new image feature called the 'continuity feature' instead of classical noise. This feature can be extracted from any kind of image by a simple process and without using a searching technique. A new matching technique is proposed to match the continuity feature. The new algorithm resolves the main disadvantages of feature-based stereo matching algorithms.
Modeling Achievement by Measuring the Enacted Instruction
ERIC Educational Resources Information Center
Walkup, John R.; Jones, Ben S.
2008-01-01
This article presents a mathematical algorithm that relates student achievement with directly observable, quantifiable teacher and student behaviors, producing a modified form of the Walberg model. The algorithm (1) expands the measurable factors that comprise the quality of instruction in a linear basis of research-based teaching components and…
DNABIT Compress - Genome compression algorithm.
Rajarajeswari, Pothuraju; Apparao, Allam
2011-01-01
Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress," for DNA sequences, based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that "DNABIT Compress" is the best among the remaining compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to fragments of DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the best existing methods could not achieve a ratio below 1.72 bits/base.
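The fixed 2-bits-per-base part of such a scheme can be sketched as follows. The repeat-specific variable codes that let DNABIT Compress go below 2 bits/base are not reproduced here; this shows only the baseline bit packing:

```python
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack_dna(seq):
    """Pack a DNA string at 2 bits/base into bytes (sketch of the fixed-code
    part only). Returns the packed bytes and the original base count."""
    out, buf, nbits = bytearray(), 0, 0
    for base in seq:
        buf = (buf << 2) | CODE[base]
        nbits += 2
        if nbits == 8:
            out.append(buf)
            buf, nbits = 0, 0
    if nbits:
        out.append(buf << (8 - nbits))   # zero-pad the final partial byte
    return bytes(out), len(seq)

def unpack_dna(packed, n):
    bases = "ACGT"
    seq = []
    for byte in packed:
        for shift in (6, 4, 2, 0):
            if len(seq) < n:
                seq.append(bases[(byte >> shift) & 0b11])
    return "".join(seq)

packed, n = pack_dna("ACGTACGTTG")
restored = unpack_dna(packed, n)
```

Ten bases need 20 bits, so the packed form fits in 3 bytes versus 10 bytes of ASCII.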
Parental Involvement and Academic Achievement
ERIC Educational Resources Information Center
Goodwin, Sarah Christine
2015-01-01
This research study examined the correlation between student achievement and parents' perceptions of their involvement in their child's schooling. Parent participants completed the Parent Involvement Project Parent Questionnaire. Results slightly indicated that parents of students with higher levels of achievement perceived less demand or invitations…
Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs
Chen, Haijian; Han, Dongmei; Dai, Yonghui; Zhao, Lina
2015-01-01
In recent years, Massive Open Online Courses (MOOCs) have become very popular among college students and have a powerful impact on academic institutions. In the MOOCs environment, knowledge discovery and knowledge sharing are very important, and they are currently often achieved by ontology techniques. In building an ontology, automatic extraction technology is crucial. Because general text mining algorithms do not perform well on online course material, we designed an automatic extraction of course knowledge points (AECKP) algorithm for online courses. It includes document classification, Chinese word segmentation, and POS tagging for each document. The Vector Space Model (VSM) is used to calculate similarity, and weights are designed to optimize the TF-IDF algorithm output values; the highest-scoring terms are selected as knowledge points. Course documents of “C programming language” were selected for the experiment in this study. The results show that the proposed approach can achieve a satisfactory accuracy rate and recall rate. PMID:26448738
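The TF-IDF scoring step that selects candidate knowledge points can be sketched as follows. The token lists are toy examples; the real pipeline first applies Chinese word segmentation and POS tagging, and adds learned weights on top of the plain TF-IDF shown here:

```python
import math
from collections import Counter

def tf_idf_keypoints(docs, top_k=2):
    """Rank terms per document by TF-IDF; the highest-scoring terms are
    taken as candidate knowledge points (sketch of the scoring step)."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))              # document frequency of each term
    result = []
    for doc in docs:
        tf = Counter(doc)
        ranked = sorted(tf, key=lambda t: -(tf[t] / len(doc)) * math.log(n / df[t]))
        result.append(ranked[:top_k])
    return result

docs = [["pointer", "pointer", "array", "loop"],
        ["loop", "loop", "condition", "array"],
        ["function", "recursion", "function", "loop"]]
points = tf_idf_keypoints(docs, top_k=1)
```

Terms occurring in every document ("loop") get zero IDF and drop out, which is exactly why TF-IDF suits knowledge-point extraction better than raw frequency.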
Approximate Algorithms for Computing Spatial Distance Histograms with Accuracy Guarantees
Grupcev, Vladimir; Yuan, Yongke; Tu, Yi-Cheng; Huang, Jin; Chen, Shaoping; Pandit, Sagar; Weng, Michael
2014-01-01
Particle simulation has become an important research tool in many scientific and engineering fields. Data generated by such simulations impose great challenges to database storage and query processing. One of the queries against particle simulation data, the spatial distance histogram (SDH) query, is the building block of many high-level analytics, and requires quadratic time to compute using a straightforward algorithm. Previous work has developed efficient algorithms that compute exact SDHs. While beating the naive solution, such algorithms are still not practical in processing SDH queries against large-scale simulation data. In this paper, we take a different path to tackle this problem by focusing on approximate algorithms with provable error bounds. We first present a solution derived from the aforementioned exact SDH algorithm, and this solution has running time that is unrelated to the system size N. We also develop a mathematical model to analyze the mechanism that leads to errors in the basic approximate algorithm. Our model provides insights on how the algorithm can be improved to achieve higher accuracy and efficiency. Such insights give rise to a new approximate algorithm with improved time/accuracy tradeoff. Experimental results confirm our analysis. PMID:24693210
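The quadratic baseline the paper improves on is easy to state: count every pairwise distance into fixed-width buckets. A naive SDH over a handful of toy points (bucket width chosen arbitrarily):

```python
import math

def spatial_distance_histogram(points, bucket_width, n_buckets):
    """Naive O(n^2) SDH: count each pairwise distance into fixed-width
    buckets. This is the quadratic baseline that approximate algorithms
    with error bounds are designed to beat."""
    hist = [0] * n_buckets
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(points[i], points[j])
            b = min(int(d / bucket_width), n_buckets - 1)
            hist[b] += 1
    return hist

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (3.0, 4.0)]
hist = spatial_distance_histogram(pts, bucket_width=2.0, n_buckets=3)
```

The approximate algorithms in the paper avoid enumerating pairs by reasoning about whole regions of a space-partitioning tree at once; the naive version above is the reference they are checked against.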
Graded Achievement, Tested Achievement, and Validity
ERIC Educational Resources Information Center
Brookhart, Susan M.
2015-01-01
Twenty-eight studies of grades, over a century, were reviewed using the argument-based approach to validity suggested by Kane as a theoretical framework. The review draws conclusions about the meaning of graded achievement, its relation to tested achievement, and changes in the construct of graded achievement over time. "Graded…
The clinical algorithm nosology: a method for comparing algorithmic guidelines.
Pearson, S D; Margolis, C Z; Davis, S; Schreier, L K; Gottlieb, L K
1992-01-01
Concern regarding the cost and quality of medical care has led to a proliferation of competing clinical practice guidelines. No technique has been described for determining objectively the degree of similarity between alternative guidelines for the same clinical problem. The authors describe the development of the Clinical Algorithm Nosology (CAN), a new method to compare one form of guideline: the clinical algorithm. The CAN measures overall design complexity independent of algorithm content, qualitatively describes the clinical differences between two alternative algorithms, and then scores the degree of similarity between them. CAN algorithm design-complexity scores correlated highly with clinicians' estimates of complexity on an ordinal scale (r = 0.86). Five pairs of clinical algorithms addressing three topics (gallstone lithotripsy, thyroid nodule, and sinusitis) were selected for interrater reliability testing of the CAN clinical-similarity scoring system. Raters categorized the similarity of algorithm pathways in alternative algorithms as "identical," "similar," or "different." Interrater agreement was achieved on 85/109 scores (80%), weighted kappa statistic, k = 0.73. It is concluded that the CAN is a valid method for determining the structural complexity of clinical algorithms, and a reliable method for describing differences and scoring the similarity between algorithms for the same clinical problem. In the future, the CAN may serve to evaluate the reliability of algorithm development programs, and to support providers and purchasers in choosing among alternative clinical guidelines.
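The weighted kappa statistic used for the interrater reliability test can be computed generically as follows. The ratings and the linear weight matrix are illustrative, not the study's data:

```python
def weighted_kappa(r1, r2, categories, weight):
    """Weighted Cohen's kappa for two raters: observed weighted agreement
    versus the agreement expected by chance from the marginal totals."""
    n = len(r1)
    idx = {c: i for i, c in enumerate(categories)}
    k = len(categories)
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[idx[a]][idx[b]] += 1.0 / n
    p1 = [sum(obs[i][j] for j in range(k)) for i in range(k)]   # rater-1 marginals
    p2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]   # rater-2 marginals
    po = sum(weight[i][j] * obs[i][j] for i in range(k) for j in range(k))
    pe = sum(weight[i][j] * p1[i] * p2[j] for i in range(k) for j in range(k))
    return (po - pe) / (1.0 - pe)

cats = ["identical", "similar", "different"]
# Linear agreement weights: 1 on the diagonal, 0.5 one category apart, 0 otherwise.
w = [[1.0, 0.5, 0.0],
     [0.5, 1.0, 0.5],
     [0.0, 0.5, 1.0]]
rater1 = ["identical", "similar", "similar", "different", "identical"]
rater2 = ["identical", "similar", "different", "different", "similar"]
kappa = weighted_kappa(rater1, rater2, cats, w)
```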
Higher Education or Higher Skilling?
ERIC Educational Resources Information Center
Muller, Steven
1974-01-01
Higher education may return to education for a minority, an unlikely course; concentrate on higher skilling, the road we are on today; or restore general education, the most attractive possibility, which can be implemented by restoring basic education in literacy, history, human biology, and language. (JH)
Middle Grades: Quality Teaching Equals Higher Student Achievement. Research Brief
ERIC Educational Resources Information Center
Bottoms, Gene; Hertl, Jordan; Mollette, Melinda; Patterson, Lenora
2014-01-01
The middle grades are critical to public school systems and our nation's economy. They are the make-or-break point in students' futures. Studies repeatedly show when students are not engaged and lose interest in the middle grades, they are likely to fall behind in ninth grade and later drop out of school. When this happens, the workforce suffers, and…
Time Management and Academic Achievement of Higher Secondary Students
ERIC Educational Resources Information Center
Cyril, A. Vences
2015-01-01
The only thing which cannot be changed by man is time. One cannot get back time lost or gone. Nothing can be substituted for time. Time management is actually self-management. The skills that people need to manage others are the same skills that are required to manage themselves. The purpose of the present study was to explore the relation between…
Can We Achieve Our National Higher-Education Goals?
ERIC Educational Resources Information Center
Kirwan, William
2009-01-01
In several high-profile speeches this year, President Barack Obama has set an ambitious educational goal: By 2020, the United States will have the highest proportion of adults with a college degree in the world. The emphasis on education in both his proposed budget for fiscal 2010 and in the American Recovery and Reinvestment Act of 2009…
Research on algorithm for infrared hyperspectral imaging Fourier transform spectrometer technology
NASA Astrophysics Data System (ADS)
Wan, Lifang; Chen, Yan; Liao, Ningfang; Lv, Hang; He, Shufang; Li, Yasheng
2015-08-01
This paper reports an algorithm for infrared hyperspectral imaging Fourier transform spectrometer technology. Six different apodization functions are used and compared, the Forman phase-correction technique is studied and improved, and the fast Fourier transform (FFT) is used instead of linear convolution to reduce the amount of computation. The interferograms acquired by the Infrared Hyperspectral Imaging Radiometric Spectrometer are corrected and reconstructed by the improved algorithm, which reduces noise and accelerates the computation while achieving higher spectrometer accuracy.
ERIC Educational Resources Information Center
Hendrickson, Robert M.
This eighth chapter of "The Yearbook of School Law, 1986" summarizes and analyzes over 330 state and federal court cases litigated in 1985 in which institutions of higher education were involved. Among the topics examined were relationships between postsecondary institutions and various governmental agencies; discrimination in the employment of…
ERIC Educational Resources Information Center
Hendrickson, Robert M.; Gregory, Dennis E.
Decisions made by federal and state courts during 1983 concerning higher education are reported in this chapter. Issues of employment and the treatment of students underlay the bulk of the litigation. Specific topics addressed in these and other cases included federal authority to enforce regulations against age discrimination and to revoke an…
ERIC Educational Resources Information Center
Hendrickson, Robert M.
Litigation in 1987 was very brisk with an increase in the number of higher education cases reviewed. Cases discussed in this chapter are organized under four major topics: (1) intergovernmental relations; (2) employees, involving discrimination claims, tenured and nontenured faculty, collective bargaining and denial of employee benefits; (3)…
ERIC Educational Resources Information Center
Hendrickson, Robert M.; Finnegan, Dorothy E.
The higher education case law in 1988 is extensive. Cases discussed in this chapter are organized under five major topics: (1) intergovernmental relations; (2) employees, involving discrimination claims, tenured and nontenured faculty, collective bargaining, and denial of employee benefits; (3) students, involving admissions, financial aid, First…
ERIC Educational Resources Information Center
Bok, Derek
Factors that distinguish the United States higher education system and its performance are considered, with attention to new developments, prospects for change, undergraduate education, and professional schools (especially law, business, and medicine). The way universities change the methods and content of their teaching in response to new…
Optical rate sensor algorithms
NASA Astrophysics Data System (ADS)
Uhde-Lacovara, Jo A.
1989-12-01
Optical sensors, in particular Charge Coupled Device (CCD) arrays, will be used on Space Station to track stars in order to provide inertial attitude reference. Algorithms are presented to derive attitude rate from the optical sensors. The first algorithm is a recursive differentiator. A variance reduction factor (VRF) of 0.0228 was achieved with a rise time of 10 samples. A VRF of 0.2522 gives a rise time of 4 samples. The second algorithm is based on the direct manipulation of the pixel intensity outputs of the sensor. In 1-dimensional simulations, the derived rate was within 0.07 percent of the actual rate in the presence of additive Gaussian noise with a signal to noise ratio of 60 dB.
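A recursive differentiator and its variance reduction factor can be sketched generically: a one-pole smoother applied to first differences of the position samples. The filter form and coefficients below are illustrative (the abstract does not give the actual Space Station design), and the VRF is taken here as the white-noise variance gain, i.e. the sum of squared impulse-response coefficients:

```python
def recursive_differentiator(samples, alpha, dt=1.0):
    """Rate estimate from position samples:
    y[k] = alpha*y[k-1] + (1-alpha)*(x[k]-x[k-1])/dt."""
    y, out, prev = 0.0, [], None
    for x in samples:
        if prev is not None:
            y = alpha * y + (1.0 - alpha) * (x - prev) / dt
        out.append(y)
        prev = x
    return out

def variance_reduction_factor(alpha, n=200):
    """VRF as the sum of squared impulse-response coefficients: the factor
    by which the filter scales the variance of white measurement noise."""
    impulse = [0.0] * n
    impulse[1] = 1.0                      # unit spike in the position input
    h = recursive_differentiator(impulse, alpha)
    return sum(v * v for v in h)

# Ramp input (rate 1.0): the estimate converges toward the true rate.
rates = recursive_differentiator([0.0, 1.0, 2.0, 3.0, 4.0], alpha=0.5)
vrf = variance_reduction_factor(alpha=0.9)
```

Heavier smoothing (alpha near 1) lowers the VRF but lengthens the rise time, the same trade-off the two quoted VRF/rise-time pairs illustrate.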
Synthesis of Greedy Algorithms Using Dominance Relations
NASA Technical Reports Server (NTRS)
Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.
2010-01-01
Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection, and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
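Activity selection, one of the examples the abstract names, illustrates the dominance idea: among mutually compatible choices, an activity that finishes earlier dominates one that finishes later, so sorting by finish time and choosing greedily is optimal. A sketch:

```python
# Greedy activity selection. Sorting by finish time encodes the dominance
# relation: the earliest-finishing compatible activity can always be chosen
# without losing optimality.
def select_activities(intervals):
    """Return a maximum set of pairwise non-overlapping (start, end) intervals."""
    chosen = []
    last_end = float("-inf")
    for start, end in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_end:      # compatible with everything chosen so far
            chosen.append((start, end))
            last_end = end
    return chosen

acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
print(select_activities(acts))   # [(1, 4), (5, 7), (8, 11)]
```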
Parallel algorithms and architectures for the manipulator inertia matrix
Amin-Javaheri, M.
1989-01-01
Several parallel algorithms and architectures to compute the manipulator inertia matrix in real time are proposed. An O(N) and an O(log₂N) parallel algorithm based upon recursive computation of the inertial parameters of sets of composite rigid bodies are formulated. One- and two-dimensional systolic architectures are presented to implement the O(N) parallel algorithm. A cube architecture is employed to compute the diagonal elements of the inertia matrix in O(log₂N) time and the upper off-diagonal elements in O(N) time. The resulting K₁O(N) + K₂O(log₂N) parallel algorithm is more efficient for a cube network implementation. All the architectural configurations are based upon a VLSI Robotics Processor exploiting fine-grain parallelism. In evaluating all the architectural configurations, significant performance parameters such as I/O time and idle time due to processor synchronization, as well as CPU utilization and on-chip memory size, are fully included. The O(N) and O(log₂N) parallel algorithms adhere to the precedence relationships among the processors. To achieve a higher speedup factor, however, parallel algorithms in conjunction with non-strict computational models are devised to relax interprocess precedence and, as a result, to decrease the effective computational delays. The effectiveness of the non-strict computational algorithms is verified by computer simulations based on a PUMA 560 robot manipulator. It is demonstrated that a combination of parallel algorithms and architectures results in a very effective approach to achieving real-time response for computing the manipulator inertia matrix.
A fast and accurate frequency estimation algorithm for sinusoidal signal with harmonic components
NASA Astrophysics Data System (ADS)
Hu, Jinghua; Pan, Mengchun; Zeng, Zhidun; Hu, Jiafei; Chen, Dixiang; Tian, Wugang; Zhao, Jianqiang; Du, Qingfa
2016-10-01
Frequency estimation is a fundamental problem in many applications, such as traditional vibration measurement, power system supervision, and microelectromechanical system sensor control. In this paper, a fast and accurate frequency estimation algorithm is proposed to deal with the low-efficiency problem of traditional methods. The proposed algorithm consists of coarse and fine frequency estimation steps, and we demonstrate that applying a modified zero-crossing technique is more efficient than conventional searching methods for obtaining the coarse frequency estimate (locating the peak of the FFT amplitude). Thus, the proposed estimation algorithm requires fewer hardware and software resources and achieves even higher efficiency as the amount of experimental data increases. Experimental results with a modulated magnetic signal show that the root mean square error of frequency estimation is below 0.032 Hz with the proposed algorithm, which has lower computational complexity and better global performance than conventional frequency estimation methods.
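The coarse-then-fine structure can be sketched generically. Below, the coarse step locates the FFT amplitude peak and the fine step refines it by parabolic interpolation of the neighboring bins; this interpolation is a common stand-in, not the paper's modified zero-crossing technique.

```python
import numpy as np

# Generic coarse/fine frequency estimator: FFT peak bin (coarse) plus
# parabolic interpolation of the three bins around it (fine). The fine
# step here is a standard substitute for the paper's own refinement.
def estimate_frequency(x, fs):
    """Estimate the dominant frequency (Hz) of a real signal x sampled at fs."""
    n = len(x)
    spec = np.abs(np.fft.rfft(x * np.hanning(n)))
    k = int(np.argmax(spec[1:-1])) + 1           # coarse: peak bin
    a, b, c = spec[k - 1], spec[k], spec[k + 1]  # fine: parabolic vertex
    delta = 0.5 * (a - c) / (a - 2 * b + c)
    return (k + delta) * fs / n

fs = 1000.0
t = np.arange(2048) / fs
x = np.sin(2 * np.pi * 123.4 * t)
print(round(estimate_frequency(x, fs), 1))
```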
Interpolation algorithms for machine tools
Burleson, R.R.
1981-08-01
There are three types of interpolation algorithms presently used in most numerical control systems: digital differential analyzer, pulse-rate multiplier, and binary-rate multiplier. A method for higher-order interpolation is in the experimental stages. The trends point toward the use of high-speed microprocessors to perform these interpolation algorithms.
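The first of the three listed types, the digital differential analyzer (DDA), can be sketched in a few lines: each axis keeps an accumulator that adds the axis delta every clock, and an overflow past the major-axis length emits one step pulse on that axis.

```python
# A sketch of DDA linear interpolation for a two-axis machine tool.
# Accumulator overflow converts per-clock fractional increments into
# discrete step pulses on each axis.
def dda_line(dx, dy):
    """Yield (x_step, y_step) pulses tracing a line of dx-by-dy steps."""
    steps = max(dx, dy)
    acc_x = acc_y = 0
    x = y = 0
    pulses = []
    for _ in range(steps):
        acc_x += dx
        acc_y += dy
        sx = sy = 0
        if acc_x >= steps:
            acc_x -= steps
            sx = 1
        if acc_y >= steps:
            acc_y -= steps
            sy = 1
        x += sx
        y += sy
        pulses.append((sx, sy))
    return pulses, (x, y)

pulses, end = dda_line(7, 3)
print(end)   # reaches (7, 3) after 7 clock iterations
```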
Pedestrian navigation algorithm based on MIMU with building heading/magnetometer
NASA Astrophysics Data System (ADS)
Meng, Xiang-bin; Pan, Xian-fei; Chen, Chang-hao; Hu, Xiao-ping
2016-01-01
In order to improve the accuracy of a low-cost MIMU inertial navigation system in pedestrian navigation, and to reduce the heading error caused by the low accuracy of the MIMU components, a novel algorithm is put forward that fuses building-heading constraint information with magnetic-heading information. We analysed the application conditions and the corrective effect of building heading and magnetic heading, and then conducted experiments in an indoor environment. The results show that the proposed algorithm better restricts the heading-drift problem and achieves higher navigation precision.
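A hedged sketch of the building-heading constraint: indoor walking mostly follows a building's principal axes, so when the magnetic heading lies close to one of those axes the estimate can be blended toward it, suppressing drift. The axes, threshold, and blend weight below are illustrative assumptions, not values from the paper.

```python
# Illustrative heading fusion: blend the magnetic heading toward the
# nearest building axis when it is within a snap threshold. All numeric
# parameters here are assumptions for the sketch.
def fuse_heading(mag_heading_deg, building_axes_deg=(0, 90, 180, 270),
                 snap_threshold_deg=15.0, weight=0.8):
    """Return a heading (deg) corrected by the building-heading constraint."""
    def ang_diff(a, b):
        # signed angular difference in (-180, 180]
        return (a - b + 180.0) % 360.0 - 180.0
    nearest = min(building_axes_deg,
                  key=lambda ax: abs(ang_diff(mag_heading_deg, ax)))
    d = ang_diff(mag_heading_deg, nearest)
    if abs(d) <= snap_threshold_deg:
        return (mag_heading_deg - weight * d) % 360.0
    return mag_heading_deg % 360.0      # far from any axis: trust magnetometer

print(fuse_heading(96.0))   # pulled from 96 deg toward the 90 deg axis
```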
Quantum defragmentation algorithm
Burgarth, Daniel; Giovannetti, Vittorio
2010-08-15
In this addendum to our paper [D. Burgarth and V. Giovannetti, Phys. Rev. Lett. 99, 100501 (2007)] we prove that during the transformation that allows one to enforce control by relaxation on a quantum system, the ancillary memory can be kept at a finite size, independently from the fidelity one wants to achieve. The result is obtained by introducing the quantum analog of defragmentation algorithms which are employed for efficiently reorganizing classical information in conventional hard disks.
A novel image encryption algorithm using chaos and reversible cellular automata
NASA Astrophysics Data System (ADS)
Wang, Xingyuan; Luan, Dapeng
2013-11-01
In this paper, a novel image encryption scheme based on reversible cellular automata (RCA) combined with chaos is proposed. In this algorithm, an intertwining logistic map with complex behavior and periodic-boundary reversible cellular automata are used. We split each pixel of the image into units of 4 bits, then adopt a pseudorandom key stream generated by the intertwining logistic map to permute these units in the confusion stage. In the diffusion stage, two-dimensional reversible cellular automata, which are discrete dynamical systems, are iterated for many rounds to achieve diffusion at the bit level; only the higher 4 bits of each pixel are considered, because they carry almost all of the information in an image. Theoretical analysis and experimental results demonstrate that the proposed algorithm achieves a high security level and performs well against common attacks such as differential and statistical attacks. This algorithm belongs to the class of symmetric systems.
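The confusion stage can be sketched generically: split pixels into 4-bit units and permute them with a chaos-driven keystream. The plain logistic map below stands in for the paper's intertwining logistic map, and the RCA diffusion stage is omitted entirely.

```python
# Simplified confusion-stage sketch: chaos-keyed permutation of 4-bit units.
# The standard logistic map is used here as a stand-in keystream generator.
def logistic_keystream(x0, r, n):
    """Generate n chaotic values in (0, 1) from the logistic map."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

def permute_nibbles(pixels, x0=0.3141, r=3.99):
    """Split pixels into 4-bit units and permute them by keystream rank."""
    nibbles = []
    for p in pixels:
        nibbles.extend([p >> 4, p & 0x0F])
    ks = logistic_keystream(x0, r, len(nibbles))
    order = sorted(range(len(nibbles)), key=lambda i: ks[i])
    shuffled = [nibbles[i] for i in order]
    out = [(shuffled[2 * i] << 4) | shuffled[2 * i + 1]
           for i in range(len(pixels))]
    return out, order

def unpermute_nibbles(cipher, order):
    """Invert the permutation given the same keystream-derived order."""
    nibbles = []
    for p in cipher:
        nibbles.extend([p >> 4, p & 0x0F])
    restored = [0] * len(nibbles)
    for pos, src in enumerate(order):
        restored[src] = nibbles[pos]
    return [(restored[2 * i] << 4) | restored[2 * i + 1]
            for i in range(len(cipher))]

img = [12, 200, 57, 255]
enc, order = permute_nibbles(img)
assert unpermute_nibbles(enc, order) == img   # permutation is reversible
```

The decryptor regenerates the same `order` from the shared key `(x0, r)`, which is what makes the scheme symmetric.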
Messy genetic algorithms: Recent developments
Kargupta, H.
1996-09-01
Messy genetic algorithms define a rare class of algorithms that realize the need for detecting appropriate relations among members of the search domain in optimization. This paper reviews earlier work on messy genetic algorithms and describes some recent developments. It also describes the gene expression messy GA (GEMGA), an O(Λ^κ(ℓ² + κ)) sample-complexity algorithm for the class of order-κ delineable problems (problems that can be solved by considering relations of no higher than order κ) of size ℓ and alphabet size Λ. Experimental results are presented to demonstrate the scalability of the GEMGA.
Distributed edge detection algorithm based on wavelet transform for wireless video sensor network
NASA Astrophysics Data System (ADS)
Li, Qiulin; Hao, Qun; Song, Yong; Wang, Dongsheng
2010-12-01
Edge detection algorithms are critical to image processing and computer vision. Traditional edge detection algorithms are not suitable for a wireless video sensor network (WVSN), in which the nodes have limited computational capability and resources. In this paper, a distributed edge detection algorithm based on the wavelet transform, designed for WVSN, is proposed. The wavelet transform decomposes the image into several parts, which are then assigned to different nodes through the wireless network. Each node performs the sub-image edge detection algorithm on its part, and all results are sent to the sink node, where fusion and synthesis, including image binarization and edge connection, are executed before the final edge image is output. A lifting scheme and a parallel distributed algorithm are adopted to improve efficiency and, simultaneously, decrease computational complexity. Experimental results show that this method achieves higher efficiency and better results.
Distributed edge detection algorithm based on wavelet transform for wireless video sensor network
NASA Astrophysics Data System (ADS)
Li, Qiulin; Hao, Qun; Song, Yong; Wang, Dongsheng
2011-05-01
Edge detection algorithms are critical to image processing and computer vision. Traditional edge detection algorithms are not suitable for a wireless video sensor network (WVSN), in which the nodes have limited computational capability and resources. In this paper, a distributed edge detection algorithm based on the wavelet transform, designed for WVSN, is proposed. The wavelet transform decomposes the image into several parts, which are then assigned to different nodes through the wireless network. Each node performs the sub-image edge detection algorithm on its part, and all results are sent to the sink node, where fusion and synthesis, including image binarization and edge connection, are executed before the final edge image is output. A lifting scheme and a parallel distributed algorithm are adopted to improve efficiency and, simultaneously, decrease computational complexity. Experimental results show that this method achieves higher efficiency and better results.
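The distribute/detect/fuse pipeline can be sketched with plain gradient magnitude standing in for the wavelet-based detector: the image is split into strips (one per node), each node computes a local edge response, and the sink stitches and binarizes the results.

```python
import numpy as np

# Sketch of the distributed pipeline. Strips stand in for the wavelet
# decomposition, and gradient magnitude stands in for the paper's
# wavelet-based edge detector; both substitutions are assumptions.
def local_edges(tile):
    """Per-node step: gradient-magnitude edge response for one tile."""
    gy, gx = np.gradient(tile.astype(float))
    return np.hypot(gx, gy)

def distributed_edge_map(image, n_nodes=4, threshold=0.5):
    """Split among nodes, detect locally, then fuse and binarize at the sink."""
    strips = np.array_split(image, n_nodes, axis=0)
    responses = [local_edges(s) for s in strips]   # done on separate nodes
    fused = np.vstack(responses)                   # sink: synthesis
    return (fused > threshold * fused.max()).astype(np.uint8)

img = np.zeros((32, 32))
img[:, 16:] = 1.0                                  # vertical step edge
edges = distributed_edge_map(img)
print(bool(edges.sum() > 0))   # True: the step edge is detected
```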
Network representations of knowledge about chemical equilibrium: Variations with achievement
NASA Astrophysics Data System (ADS)
Wilson, Janice M.
This study examined variation in the organization of domain-specific knowledge by 50 Year-12 chemistry students and 4 chemistry teachers. The study used nonmetric multidimensional scaling (MDS) and the Pathfinder network-generating algorithm to investigate individual and group differences in student concept maps about chemical equilibrium. MDS was used to represent the individual maps in two-dimensional space, based on the presence or absence of paired propositional links. The resulting separation between maps reflected degree of hierarchical structure, but also reflected independent measures of student achievement. Pathfinder was then used to produce semantic networks from pooled data from high and low achievement groups using proximity matrices derived from the frequencies of paired concepts. The network constructed from maps of higher achievers (coherence measure = 0.18, linked pairs = 294, and number of subjects = 32) showed greater coherence, more concordance in specific paired links, more important specific conceptual relationships, and greater hierarchical organization than did the network constructed from maps of lower achievers (coherence measure = 0.12, linked pairs = 552, and number of subjects = 22). These differences are interpreted in terms of qualitative variation in knowledge organization by two groups of individuals with different levels of relative expertise (as reflected in achievement scores) concerning the topic of chemical equilibrium. The results suggest that the technique of transforming paired links in concept maps into proximity matrices for input to multivariate analyses provides a suitable methodology for comparing and documenting changes in the organization and structure of conceptual knowledge within and between individual students.
An ellipse detection algorithm based on edge classification
NASA Astrophysics Data System (ADS)
Yu, Liu; Chen, Feng; Huang, Jianming; Wei, Xiangquan
2015-12-01
In order to enhance the speed and accuracy of ellipse detection, an ellipse detection algorithm based on edge classification is proposed. Redundant edge points are removed by serializing edges into point form and enforcing a distance constraint between edge points. Effective classification is achieved using the angle between edge points as the criterion, which greatly increases the probability that randomly selected edge points fall on the same ellipse. Ellipse fitting accuracy is significantly improved by optimizing the RED algorithm, using the Euclidean distance from an edge point to the elliptical boundary. Experimental results show that the method detects ellipses well even when edges suffer interference or block each other, and that it achieves higher detection precision and consumes less time than the RED algorithm.
Analysis and an image recovery algorithm for ultrasonic tomography system
NASA Technical Reports Server (NTRS)
Jin, Michael Y.
1994-01-01
The problem of ultrasonic reflectivity tomography is similar to that of a spotlight-mode aircraft Synthetic Aperture Radar (SAR) system. The analysis of a circular-path spotlight-mode SAR in this paper provides insight into the system characteristics. It indicates that such a system, when operated over a wide bandwidth, is capable of achieving the ultimate resolution: one quarter of the wavelength of the carrier frequency. An efficient processing algorithm based on the exact two-dimensional spectrum is presented. Simulation results indicate that the impulse responses meet the predicted resolution performance. Compared to an algorithm previously developed for ultrasonic reflectivity tomography, the throughput rate of this algorithm is about ten times higher.
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Lomax, Harvard
1987-01-01
The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.
Testing an earthquake prediction algorithm
Kossobokov, V.G.; Healy, J.H.; Dewey, J.W.
1997-01-01
A test to evaluate earthquake prediction algorithms is being applied to a Russian algorithm known as M8. The M8 algorithm makes intermediate-term predictions for earthquakes to occur in a large circle, based on integral counts of transient seismicity in the circle. In a retroactive prediction for the period January 1, 1985 to July 1, 1991, the algorithm as configured for the forward test would have predicted eight of ten strong earthquakes in the test area. A null hypothesis, based on random assignment of predictions, predicts eight earthquakes in 2.87% of the trials. The forward test began July 1, 1991 and will run through December 31, 1997. As of July 1, 1995, the algorithm had forward predicted five out of nine earthquakes in the test area, a success ratio that would have been achieved in 53% of random trials under the null hypothesis.
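The null-hypothesis logic can be sketched: if alarms cover a fraction p of the space-time volume and earthquakes occur independently of the alarms, the number of "predicted" events among n is Binomial(n, p). The coverage fraction below is an illustrative assumption, not the value used in the actual test.

```python
import math

# Binomial tail for the random-prediction null hypothesis. The 40%
# coverage fraction is an assumption for illustration only.
def p_at_least(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

# Probability that random alarms covering 40% of space-time would score
# 8 or more hits out of 10 earthquakes:
print(round(p_at_least(8, 10, 0.40), 4))   # 0.0123
```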
Algorithms and Application of Sparse Matrix Assembly and Equation Solvers for Aeroacoustics
NASA Technical Reports Server (NTRS)
Watson, W. R.; Nguyen, D. T.; Reddy, C. J.; Vatsa, V. N.; Tang, W. H.
2001-01-01
An algorithm for symmetric sparse equation solutions on an unstructured grid is described. Efficient, sequential sparse algorithms for degree-of-freedom reordering, supernodes, symbolic/numerical factorization, and forward-backward solution phases are reviewed. Three sparse algorithms for the generation and assembly of symmetric systems of matrix equations are presented. The accuracy and numerical performance of the sequential version of the sparse algorithms are evaluated over the frequency range of interest in a three-dimensional aeroacoustics application. Results show that the solver solutions are accurate using a discretization of 12 points per wavelength. Results also show that the first assembly algorithm is impractical for high-frequency noise calculations. The second and third assembly algorithms have nearly equal performance at low values of source frequencies, but at higher values of source frequencies the third algorithm saves CPU time and RAM. The CPU time and the RAM required by the second and third assembly algorithms are two orders of magnitude smaller than those required by the sparse equation solver. A sequential version of these sparse algorithms can, therefore, be conveniently incorporated into a substructuring formulation for domain decomposition to achieve parallel computation, where different substructures are handled by different parallel processors.
Comparing Science Achievement Constructs: Targeted and Achieved
ERIC Educational Resources Information Center
Ferrara, Steve; Duncan, Teresa
2011-01-01
This article illustrates how test specifications based solely on academic content standards, without attention to other cognitive skills and item response demands, can fall short of their targeted constructs. First, the authors inductively describe the science achievement construct represented by a statewide sixth-grade science proficiency test.…
Mobility and Reading Achievement.
ERIC Educational Resources Information Center
Waters, Theresa Z.
A study examined the effect of geographic mobility on elementary school students' achievement. Although such mobility, which requires students to make multiple moves among schools, can have a negative impact on academic achievement, the hypothesis for the study was that it was not a determining factor in reading achievement test scores. Subjects…
Comprehensive eye evaluation algorithm
NASA Astrophysics Data System (ADS)
Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.
2016-03-01
In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM), using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk for age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated on two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula and optic disc centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma, respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.
Achieving energy efficiency during collective communications
Sundriyal, Vaibhav; Sosonkina, Masha; Zhang, Zhao
2012-09-13
Energy consumption has become a major design constraint in modern computing systems. With the advent of petaflops architectures, power-efficient software stacks have become imperative for scalability. Techniques such as dynamic voltage and frequency scaling (DVFS) and CPU clock modulation (throttling) are often used to reduce the power consumption of compute nodes. To avoid significant performance losses, these techniques should be used judiciously during parallel application execution; for example, an application's communication phases may be good candidates for applying DVFS and CPU throttling without incurring a considerable performance loss. Collective communication operations are often considered indivisible, and little attention has been devoted to the energy-saving potential of their algorithmic steps. In this work, two important collective communication operations, all-to-all and allgather, are investigated as to their augmentation with energy-saving strategies on a per-call basis. The experiments prove the viability of such a fine-grain approach. They also validate a theoretical power consumption estimate for multicore nodes proposed here. While keeping the performance loss low, the obtained energy savings were always significantly higher than those achieved when DVFS or throttling were switched on across the entire application run.
Higher Education Space: Future Directions
ERIC Educational Resources Information Center
Temple, Paul; Barnett, Ronald
2007-01-01
This paper reports on a study of changing demands for space in United Kingdom (UK) higher education. Physical spaces that universities require are related to their functions in complex ways, and the connections between space and academic performance are not well understood. No simple algorithm can calculate a single university's space needs, but a…
Image enhancement based on edge boosting algorithm
NASA Astrophysics Data System (ADS)
Ngernplubpla, Jaturon; Chitsobhuk, Orachat
2015-12-01
In this paper, a technique for image enhancement based on a proposed edge boosting algorithm to reconstruct a high-quality image from a single low-resolution image is described. The difficulty in single-image super-resolution is that the generic image priors residing in the low-resolution input image may not be sufficient to generate effective solutions. In order to achieve success in super-resolution reconstruction, efficient prior knowledge should be estimated. The statistics of gradient priors, in terms of a priority map based on separable gradient estimation, maximum likelihood edge estimation, and local variance, are introduced. The proposed edge boosting algorithm takes advantage of these gradient statistics to select the appropriate enhancement weights. Larger weights are applied to the higher-frequency details, while the low-frequency details are smoothed. The experimental results illustrate significant quantitative and perceptual performance improvements. It can be seen that the proposed edge boosting algorithm produces high-quality results with fewer artifacts, sharper edges, superior texture areas, and finer detail with low noise.
Algorithms, games, and evolution.
Chastain, Erick; Livnat, Adi; Papadimitriou, Christos; Vazirani, Umesh
2014-07-22
Even the most seasoned students of evolution, starting with Darwin himself, have occasionally expressed amazement that the mechanism of natural selection has produced the whole of Life as we see it around us. There is a computational way to articulate the same amazement: "What algorithm could possibly achieve all this in a mere three and a half billion years?" In this paper we propose an answer: We demonstrate that in the regime of weak selection, the standard equations of population genetics describing natural selection in the presence of sex become identical to those of a repeated game between genes played according to multiplicative weight updates (MWUA), an algorithm known in computer science to be surprisingly powerful and versatile. MWUA maximizes a tradeoff between cumulative performance and entropy, which suggests a new view on the maintenance of diversity in evolution.
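The MWUA rule the abstract identifies with weak-selection population genetics is short enough to sketch directly: each "expert" (allele) has a weight multiplied each round by (1 + eta * gain), and the normalized weights play the role of allele frequencies.

```python
# Minimal multiplicative weight updates (MWUA). Gains are assumed to lie
# in [-1, 1] and eta is a small learning rate, so all factors stay positive.
def mwua(gains_per_round, n_experts, eta=0.1):
    """Return final normalized weights after multiplicative updates."""
    w = [1.0] * n_experts
    for gains in gains_per_round:
        w = [wi * (1.0 + eta * g) for wi, g in zip(w, gains)]
    total = sum(w)
    return [wi / total for wi in w]

# Expert 0 consistently outperforms; its weight share grows.
rounds = [[1.0, -1.0, 0.0]] * 50
freqs = mwua(rounds, 3)
print(freqs[0] > freqs[2] > freqs[1])   # True
```

The entropy term in the cumulative-performance/entropy tradeoff the abstract mentions corresponds to the fact that weights shift gradually rather than collapsing onto the single best expert.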
A contourlet transform based algorithm for real-time video encoding
NASA Astrophysics Data System (ADS)
Katsigiannis, Stamos; Papaioannou, Georgios; Maroulis, Dimitris
2012-06-01
In recent years, real-time video communication over the internet has been widely utilized for applications like video conferencing. Streaming live video over heterogeneous IP networks, including wireless networks, requires video coding algorithms that can support various levels of quality in order to adapt to the network end-to-end bandwidth and transmitter/receiver resources. In this work, a scalable video coding and compression algorithm based on the Contourlet Transform is proposed. The algorithm allows for multiple levels of detail, without re-encoding the video frames, by just dropping the encoded information referring to higher resolution than needed. Compression is achieved by means of lossy and lossless methods, as well as variable bit rate encoding schemes. Furthermore, due to the transformation utilized, it does not suffer from blocking artifacts that occur with many widely adopted compression algorithms. Another highly advantageous characteristic of the algorithm is the suppression of noise induced by low-quality sensors usually encountered in web-cameras, due to the manipulation of the transform coefficients at the compression stage. The proposed algorithm is designed to introduce minimal coding delay, thus achieving real-time performance. Performance is enhanced by utilizing the vast computational capabilities of modern GPUs, providing satisfactory encoding and decoding times at relatively low cost. These characteristics make this method suitable for applications like video-conferencing that demand real-time performance, along with the highest visual quality possible for each user. Through the presented performance and quality evaluation of the algorithm, experimental results show that the proposed algorithm achieves better or comparable visual quality relative to other compression and encoding methods tested, while maintaining a satisfactory compression ratio. Especially at low bitrates, it provides more human-eye friendly images compared to
MLP iterative construction algorithm
NASA Astrophysics Data System (ADS)
Rathbun, Thomas F.; Rogers, Steven K.; DeSimio, Martin P.; Oxley, Mark E.
1997-04-01
The MLP Iterative Construction Algorithm (MICA) designs a Multi-Layer Perceptron (MLP) neural network as it trains. MICA adds Hidden Layer Nodes one at a time, separating classes on a pair-wise basis, until the data is projected into a linearly separable space by class. Then MICA trains the Output Layer Nodes, which results in an MLP that achieves 100% accuracy on the training data. MICA, like Backprop, produces an MLP that is a minimum mean squared error approximation of the Bayes optimal discriminant function. Moreover, MICA's training technique yields a novel feature-selection technique and a hidden-node pruning technique.
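A loose sketch of the pairwise construction idea: each hidden node is a simple perceptron trained to separate one class pair, and the hidden-layer outputs give a representation on which an output node can then be trained. The details below (perceptron training rule, toy data) are assumptions, not MICA itself.

```python
import numpy as np

# Illustrative pairwise hidden-node construction. One perceptron per class
# pair; for two classes, a single hidden node suffices if they are
# linearly separable.
def train_perceptron(X, y, epochs=100, lr=0.1):
    """y in {-1, +1}; returns (weights, bias) separating the data if possible."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:   # misclassified: nudge the plane
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Two linearly separable classes; one pair -> one hidden node.
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([-1, -1, 1, 1])
w, b = train_perceptron(X, y)
hidden = np.sign(X @ w + b)              # hidden-layer projection
print(bool((hidden == y).all()))         # True: training data separated
```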
Strategic Planning for Higher Education.
ERIC Educational Resources Information Center
Kotler, Philip; Murphy, Patrick E.
1981-01-01
The framework necessary for achieving a strategic planning posture in higher education is outlined. The most important benefit of strategic planning for higher education decision makers is that it forces them to undertake a more market-oriented and systematic approach to long- range planning. (Author/MLW)
General Achievement Trends: Oklahoma
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
General Achievement Trends: Georgia
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
General Achievement Trends: Nebraska
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
General Achievement Trends: Arkansas
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
General Achievement Trends: Maryland
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
General Achievement Trends: Maine
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
General Achievement Trends: Iowa
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
General Achievement Trends: Texas
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
General Achievement Trends: Hawaii
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
General Achievement Trends: Kansas
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
General Achievement Trends: Florida
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
General Achievement Trends: Massachusetts
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
General Achievement Trends: Tennessee
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
General Achievement Trends: Alabama
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
General Achievement Trends: Virginia
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
General Achievement Trends: Michigan
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
General Achievement Trends: Colorado
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
Inverting the Achievement Pyramid
ERIC Educational Resources Information Center
White-Hood, Marian; Shindel, Melissa
2006-01-01
Attempting to invert the pyramid to improve student achievement and increase all students' chances for success is not a new endeavor. For decades, educators have strategized, formed think tanks, and developed school improvement teams to find better ways to improve the achievement of all students. Currently, the No Child Left Behind Act (NCLB) is…
ERIC Educational Resources Information Center
Ohio State Dept. of Education, Columbus. Trade and Industrial Education Service.
The Ohio Trade and Industrial Education Achievement Test battery comprises seven basic achievement tests: Machine Trades, Automotive Mechanics, Basic Electricity, Basic Electronics, Mechanical Drafting, Printing, and Sheet Metal. The tests were developed by subject matter committees and specialists in testing and research. The Ohio Trade and…
School Effects on Achievement.
ERIC Educational Resources Information Center
Nichols, Robert C.
The New York State Education Department conducts a Pupil Evaluation Program (PEP) in which each year all third, sixth, and ninth grade students in the state are given a series of achievement tests in reading and mathematics. The data accumulated by the department includes achievement test scores, teacher characteristics, building and curriculum…
Heritability of Creative Achievement
ERIC Educational Resources Information Center
Piffer, Davide; Hur, Yoon-Mi
2014-01-01
Although creative achievement is a subject of much attention to lay people, the origin of individual differences in creative accomplishments remains poorly understood. This study examined genetic and environmental influences on creative achievement in an adult sample of 338 twins (mean age = 26.3 years; SD = 6.6 years). Twins completed the Creative…
Confronting the Achievement Gap
ERIC Educational Resources Information Center
Gardner, David
2007-01-01
This article talks about the large achievement gap between children of color and their white peers. The reasons for the achievement gap are varied. First, many urban minorities come from a background of poverty. One of the detrimental effects of growing up in poverty is receiving inadequate nourishment at a time when bodies and brains are rapidly…
ERIC Educational Resources Information Center
Abowitz, Kathleen Knight
2011-01-01
Public schools are functionally provided through structural arrangements such as government funding, but public schools are achieved in substance, in part, through local governance. In this essay, Kathleen Knight Abowitz explains the bifocal nature of achieving public schools; that is, that schools are both subject to the unitary Public compact of…
Multikernel least mean square algorithm.
Tobar, Felipe A; Kung, Sun-Yuan; Mandic, Danilo P
2014-02-01
The multikernel least-mean-square algorithm is introduced for adaptive estimation of vector-valued nonlinear and nonstationary signals. This is achieved by mapping the multivariate input data to a Hilbert space of time-varying vector-valued functions, whose inner products (kernels) are combined in an online fashion. The proposed algorithm is equipped with novel adaptive sparsification criteria ensuring a finite dictionary, and is computationally efficient and suitable for nonstationary environments. We also show the ability of the proposed vector-valued reproducing kernel Hilbert space to serve as a feature space for the class of multikernel least-squares algorithms. The benefits of adaptive multikernel (MK) estimation algorithms are illuminated in the nonlinear multivariate adaptive prediction setting. Simulations on nonlinear inertial body sensor signals and nonstationary real-world wind signals of low, medium, and high dynamic regimes support the approach. PMID:24807027
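The kernel LMS family that this paper extends can be illustrated with a minimal single-kernel, scalar-input sketch (a hedged simplification: no sparsification criteria, no vector-valued outputs, and no multikernel combination, all of which the paper adds; the function name and parameters are ours):

```python
import numpy as np

def kernel_lms(x, y, eta=0.5, sigma=1.0):
    """Minimal single-kernel LMS sketch: online prediction with a
    Gaussian-kernel expansion whose dictionary grows by one centre per
    sample. Returns the instantaneous prediction errors."""
    centers, coeffs, errors = [], [], []
    for xi, yi in zip(x, y):
        if centers:
            # Gaussian kernel between the new input and every stored centre
            k = np.exp(-(xi - np.array(centers)) ** 2 / (2 * sigma ** 2))
            pred = float(np.dot(coeffs, k))
        else:
            pred = 0.0
        e = yi - pred
        errors.append(e)
        centers.append(xi)      # naive dictionary growth (no sparsification)
        coeffs.append(eta * e)  # standard KLMS coefficient update
    return np.array(errors)
```

On a smooth regression task the prediction error shrinks as the dictionary fills in, which is the convergence behaviour the adaptive-estimation setting relies on.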
A hybrid algorithm for blind source separation of a convolutive mixture of three speech sources
NASA Astrophysics Data System (ADS)
Minhas, Shahab Faiz; Gaydecki, Patrick
2014-12-01
In this paper we present a novel hybrid algorithm for blind source separation of three speech signals in a real room environment. The algorithm in addition to using second-order statistics also exploits an information-theoretic approach, based on higher order statistics, to achieve source separation and is well suited for real-time implementation due to its fast adaptive methodology. It does not require any prior information or parameter estimation. The algorithm also uses a novel post-separation speech harmonic alignment that results in an improved performance. Experimental results in simulated and real environments verify the effectiveness of the proposed method, and analysis demonstrates that the algorithm is computationally efficient.
Tools for Analyzing Computing Resource Management Strategies and Algorithms for SDR Clouds
NASA Astrophysics Data System (ADS)
Marojevic, Vuk; Gomez-Miguelez, Ismael; Gelonch, Antoni
2012-09-01
Software defined radio (SDR) clouds centralize the computing resources of base stations. The computing resource pool is shared between radio operators and dynamically loads and unloads digital signal processing chains for providing wireless communications services on demand. Each new user session request particularly requires the allocation of computing resources for executing the corresponding SDR transceivers. The huge amount of computing resources of SDR cloud data centers and the numerous session requests at certain hours of a day require an efficient computing resource management. We propose a hierarchical approach, where the data center is divided into clusters that are managed in a distributed way. This paper presents a set of computing resource management tools for analyzing computing resource management strategies and algorithms for SDR clouds. We use the tools to evaluate different strategies and algorithms. The results show that more sophisticated algorithms can achieve higher resource occupations and that a tradeoff exists between cluster size and algorithm complexity.
Excursion-Set-Mediated Genetic Algorithm
NASA Technical Reports Server (NTRS)
Noever, David; Baskaran, Subbiah
1995-01-01
Excursion-set-mediated genetic algorithm (ESMGA) is embodiment of method of searching for and optimizing computerized mathematical models. Incorporates powerful search and optimization techniques based on concepts analogous to natural selection and laws of genetics. In comparison with other genetic algorithms, this one achieves stronger condition for implicit parallelism. Includes three stages of operations in each cycle, analogous to biological generation.
Parallel Algorithm Solves Coupled Differential Equations
NASA Technical Reports Server (NTRS)
Hayashi, A.
1987-01-01
Numerical methods adapted to concurrent processing. Algorithm solves set of coupled partial differential equations by numerical integration. Adapted to run on hypercube computer, algorithm separates problem into smaller problems solved concurrently. Increase in computing speed with concurrent processing over that achievable with conventional sequential processing appreciable, especially for large problems.
Wohak, M.G.; Beer, H.
1998-05-08
A contribution toward the full numerical simulation of direct-contact evaporation of a drop rising in a hot, immiscible and less volatile liquid of higher density is presented. Based on a fixed-grid Eulerian description, the classical SOLA-VOF method is largely extended to incorporate, for example, three incompressible fluids and liquid-vapor phase change. The thorough validation and assessment process covers several benchmark simulations, some of which are presented, documenting the multipurpose value of the new code. The direct-contact evaporation simulations reveal severe numerical problems that are closely related to the fixed-grid Euler formulation. As a consequence, the comparison to experiments has to be limited to the initial stage. Potential applications using several design variations can be found in waste heat recovery and reactor cooling. Furthermore, direct contact evaporators may be used in such geothermal power plants where the brines cannot be directly fed into a turbine either because of a high salt load causing severe fouling and corrosion or because of low steam fraction.
NASA Technical Reports Server (NTRS)
Chou, Jin
1993-01-01
Rational Bezier and B-spline representations of circles have been heavily publicized. However, all the literature assumes the rational Bezier segments in the homogeneous space are both planar and (equivalent to) quadratic. This creates the illusion that circles can only be achieved by planar and quadratic curves. Circles that are formed by higher order rational Bezier curves which are nonplanar in the homogeneous space are shown. The problem of whether it is possible to represent a complete circle with one Bezier curve is investigated. In addition, some other interesting properties of cubic Bezier arcs are discussed.
A synthesized heuristic task scheduling algorithm.
Dai, Yanyan; Zhang, Xiangli
2014-01-01
Aiming at static task scheduling problems in heterogeneous environments, a heuristic task scheduling algorithm named HCPPEFT is proposed. In the task prioritizing phase, the algorithm uses three levels of priority to choose tasks: critical tasks have the highest priority, then tasks with a longer path to the exit task are selected, and finally tasks with fewer predecessors are scheduled. In the resource selection phase, the algorithm uses task duplication to reduce the inter-resource communication cost; in addition, forecasting the impact of an assignment for all children of the current task permits better decisions to be made in selecting resources. The proposed algorithm is compared with the STDH, PEFT, and HEFT algorithms through randomly generated graphs and sets of task graphs. The experimental results show that the new algorithm achieves better scheduling performance.
Student Achievement and Motivation
ERIC Educational Resources Information Center
Flammer, Gordon H.; Mecham, Robert C.
1974-01-01
Compares the lecture and self-paced methods of instruction on the basis of student motivation and achievement, comparing motivating and demotivating factors in each, and their potential for motivation and achievement. (Authors/JR)
Reactive Collision Avoidance Algorithm
NASA Technical Reports Server (NTRS)
Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred
2010-01-01
The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation for which passive algorithms cannot. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on
Maryland's Achievements in Public Education, 2011
ERIC Educational Resources Information Center
Maryland State Department of Education, 2011
2011-01-01
This report presents Maryland's achievements in public education for 2011. Maryland's achievements include: (1) Maryland's public schools again ranked #1 in the nation in Education Week's 2011 Quality Counts annual report; (2) Maryland ranked 1st nationwide for a 3rd year in a row in the percentage of public school students scoring 3 or higher on…
Mathematics Coursework Regulates Growth in Mathematics Achievement
ERIC Educational Resources Information Center
Ma, Xin; Wilkins, Jesse L. M.
2007-01-01
Using data from the Longitudinal Study of American Youth (LSAY), we examined the extent to which students' mathematics coursework regulates (influences) the rate of growth in mathematics achievement during middle and high school. Graphical analysis showed that students who started middle school with higher achievement took individual mathematics…
Efficient algorithms for numerical simulation of the motion of earth satellites
NASA Astrophysics Data System (ADS)
Bordovitsyna, T. V.; Bykova, L. E.; Kardash, A. V.; Fedyaev, Yu. A.; Sharkovskii, N. A.
1992-08-01
We briefly present our results obtained during the development and investigation of the efficacy of algorithms for numerical prediction of the motion of earth satellites (ESs) using computers of different power. High accuracy and efficiency in predicting ES motion are achieved by using higher-order numerical methods, transformations that regularize and stabilize the equations of motion, and a high-precision model of the forces acting on an ES. This approach enables us to construct efficient algorithms of the required accuracy, both for universal computers with a large RAM and for personal computers with very limited capacity.
GPUs benchmarking in subpixel image registration algorithm
NASA Astrophysics Data System (ADS)
Sanz-Sabater, Martin; Picazo-Bueno, Jose Angel; Micó, Vicente; Ferrerira, Carlos; Granero, Luis; Garcia, Javier
2015-05-01
Image registration techniques are used across different scientific fields, such as medical imaging and optical metrology. The most straightforward way to calculate the shift between two images is cross correlation, taking the location of the highest value of the correlation image. The shift is then resolved in whole pixels, which may not be enough for certain applications. Better results can be achieved by interpolating both images up to the desired resolution and applying the same technique, but the memory needed by the system is significantly higher. To avoid this memory consumption we are implementing a subpixel shifting method based on the FFT. With the original images, a subpixel shift can be achieved by multiplying their discrete Fourier transforms by linear phases with different slopes. This method is time consuming because each candidate shift requires new calculations, but the algorithm is highly parallelizable and very suitable for high performance computing systems. GPU (Graphics Processing Unit) accelerated computing became very popular more than ten years ago because GPUs offer hundreds of computational cores on a reasonably cheap card. In our case, we register the shift between two images by doing a first approach with FFT-based correlation and then a subpixel approach using the technique described before; we consider this a `brute force' method. We will present a benchmark of the algorithm consisting of a first approach at pixel resolution followed by subpixel refinement, decreasing the shifting step in every loop to achieve high resolution in few steps. The program is executed on three different computers. At the end, we present the results of the computation with different kinds of CPUs and GPUs, checking the accuracy of the method and the time consumed on each computer, and discussing the advantages and disadvantages of the use of GPUs.
Efficient implementations of hyperspectral chemical-detection algorithms
NASA Astrophysics Data System (ADS)
Brett, Cory J. C.; DiPietro, Robert S.; Manolakis, Dimitris G.; Ingle, Vinay K.
2013-10-01
Many military and civilian applications depend on the ability to remotely sense chemical clouds using hyperspectral imagers, from detecting small but lethal concentrations of chemical warfare agents to mapping plumes in the aftermath of natural disasters. Real-time operation is critical in these applications but becomes difficult to achieve as the number of chemicals we search for increases. In this paper, we present efficient CPU and GPU implementations of matched-filter based algorithms so that real-time operation can be maintained with higher chemical-signature counts. The optimized C++ implementations show between 3x and 9x speedup over vectorized MATLAB implementations.
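The matched filter that detectors of this kind build on can be sketched in a few lines (a generic textbook matched filter assuming a known background mean and covariance, not the paper's optimized CPU/GPU code; the normalization makes a pixel containing a fraction a of the signature score approximately a):

```python
import numpy as np

def matched_filter(X, s, mu, cov):
    """Score each spectrum (row of X) against target signature s,
    given background mean mu and covariance cov. Scores are
    normalised so matched_filter(mu + a*(s - mu)) == a."""
    d = s - mu                    # background-relative signature
    q = np.linalg.solve(cov, d)   # whitened signature, cov^{-1} d
    q = q / (d @ q)               # normalise to unit response at s
    return (X - mu) @ q
```

Because scoring every pixel is a single matrix-vector product once `q` is precomputed, the per-signature cost is what the CPU/GPU optimizations above are amortizing.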
Fontana, W.
1990-12-13
In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
Large scale tracking algorithms.
Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry
2015-01-01
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
Ascent guidance algorithm using lidar wind measurements
NASA Technical Reports Server (NTRS)
Cramer, Evin J.; Bradt, Jerre E.; Hardtla, John W.
1990-01-01
The formulation of a general nonlinear programming guidance algorithm that incorporates wind measurements in the computation of ascent guidance steering commands is discussed. A nonlinear programming (NLP) algorithm that is designed to solve a very general problem has the potential to address the diversity demanded by future launch systems. Using B-splines for the command functional form allows the NLP algorithm to adjust the shape of the command profile to achieve optimal performance. The algorithm flexibility is demonstrated by simulation of ascent with dynamic loading constraints through a set of random wind profiles with and without wind sensing capability.
NASA Astrophysics Data System (ADS)
Evertz, Hans Gerd
1998-03-01
Exciting new investigations have recently become possible for strongly correlated systems of spins, bosons, and fermions, through Quantum Monte Carlo simulations with the Loop Algorithm (H.G. Evertz, G. Lana, and M. Marcu, Phys. Rev. Lett. 70, 875 (1993).) (For a recent review see: H.G. Evertz, cond-mat/9707221.) and its generalizations. A review of this new method, its generalizations and its applications is given, including some new results. The Loop Algorithm is based on a formulation of physical models in an extended ensemble of worldlines and graphs, and is related to Swendsen-Wang cluster algorithms. It performs nonlocal changes of worldline configurations, determined by local stochastic decisions. It overcomes many of the difficulties of traditional worldline simulations. Computer time requirements are reduced by orders of magnitude, through a corresponding reduction in autocorrelations. The grand-canonical ensemble (e.g. varying winding numbers) is naturally simulated. The continuous time limit can be taken directly. Improved Estimators exist which further reduce the errors of measured quantities. The algorithm applies unchanged in any dimension and for varying bond-strengths. It becomes less efficient in the presence of strong site disorder or strong magnetic fields. It applies directly to locally XYZ-like spin, fermion, and hard-core boson models. It has been extended to the Hubbard and the tJ model and generalized to higher spin representations. There have already been several large scale applications, especially for Heisenberg-like models, including a high statistics continuous time calculation of quantum critical exponents on a regularly depleted two-dimensional lattice of up to 20000 spatial sites at temperatures down to T=0.01 J.
Achieving Diversity in Academia: A Dream Deferred?
ERIC Educational Resources Information Center
Leonard, Jacqueline; Horvat, Erin McNamara; Riley-Tillman, T. Chris
Attempts to achieve faculty diversity in institutions of higher education have increased in recent years. Despite these attempts, faculty of color and women are still underrepresented in the higher ranks. This paper presents autobiographies focusing on the career trajectories of three junior faculty members at one institution: a divorced…
New algorithms for binary wavefront optimization
NASA Astrophysics Data System (ADS)
Zhang, Xiaolong; Kner, Peter
2015-03-01
Binary amplitude modulation promises to allow rapid focusing through strongly scattering media with a large number of segments due to the faster update rates of digital micromirror devices (DMDs) compared to spatial light modulators (SLMs). While binary amplitude modulation has a lower theoretical enhancement than phase modulation, the faster update rate should more than compensate for the difference, a factor of π²/2. Here we present two new algorithms, a genetic algorithm and a transmission matrix algorithm, for optimizing the focus with binary amplitude modulation that achieve enhancements close to the theoretical maximum. Genetic algorithms have been shown to work well in noisy environments and we show that the genetic algorithm performs better than a stepwise algorithm. Transmission matrix algorithms allow complete characterization and control of the medium but require phase control either at the input or output. Here we introduce a transmission matrix algorithm that works with only binary amplitude control and intensity measurements. We apply these algorithms to binary amplitude modulation using a Texas Instruments Digital Micromirror Device. Here we report an enhancement of 152 with 1536 segments (9.90%×N) using a genetic algorithm with binary amplitude modulation and an enhancement of 136 with 1536 segments (8.9%×N) using an intensity-only transmission matrix algorithm.
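A toy version of genetic-algorithm binary amplitude optimization might look like this (all names and parameters are illustrative assumptions; a real experiment would measure the focal intensity on a camera, whereas here it is computed from a known random transmission vector standing in for one row of the medium's transmission matrix):

```python
import numpy as np

def binary_ga_focus(t, pop=30, gens=60, rate=0.05, seed=0):
    """Toy GA for binary amplitude focusing: choose which of the
    len(t) segments to switch on so the focal intensity
    |sum of selected transmission coefficients|^2 is maximised."""
    rng = np.random.default_rng(seed)
    n = len(t)
    masks = rng.integers(0, 2, (pop, n))          # random binary population
    fitness = lambda m: np.abs(m @ t) ** 2        # simulated focal intensity
    for _ in range(gens):
        scores = np.array([fitness(m) for m in masks])
        masks = masks[np.argsort(scores)[::-1]]   # sort best-first (elitism)
        children = []
        for _ in range(pop // 2):                 # breed from the top half
            a, b = masks[rng.integers(0, pop // 2, 2)]
            cut = rng.integers(1, n)              # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child[rng.random(n) < rate] ^= 1      # bit-flip mutation
            children.append(child)
        masks[pop // 2 :] = children              # replace the bottom half
    scores = np.array([fitness(m) for m in masks])
    return masks[np.argmax(scores)], scores.max()
```

Because the best individual is always retained, the achieved intensity is monotone non-decreasing over generations, which is the robustness-to-noise property that favors GAs in this setting.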
Image watermarking using a dynamically weighted fuzzy c-means algorithm
NASA Astrophysics Data System (ADS)
Kang, Myeongsu; Ho, Linh Tran; Kim, Yongmin; Kim, Cheol Hong; Kim, Jong-Myon
2011-10-01
Digital watermarking has received extensive attention as a new method of protecting multimedia content from unauthorized copying. In this paper, we present a nonblind watermarking system using a proposed dynamically weighted fuzzy c-means (DWFCM) technique combined with discrete wavelet transform (DWT), discrete cosine transform (DCT), and singular value decomposition (SVD) techniques for copyright protection. The proposed scheme efficiently selects blocks in which the watermark is embedded using new membership values of DWFCM as the embedding strength. We evaluated the proposed algorithm in terms of robustness against various watermarking attacks and imperceptibility compared to other algorithms [DWT-DCT-based and DCT-fuzzy c-means (FCM)-based algorithms]. Experimental results indicate that the proposed algorithm outperforms other algorithms in terms of robustness against several types of attacks, such as noise addition (Gaussian noise, salt and pepper noise), rotation, Gaussian low-pass filtering, mean filtering, median filtering, Gaussian blur, image sharpening, histogram equalization, and JPEG compression. In addition, the proposed algorithm achieves higher values of peak signal-to-noise ratio (approximately 49 dB) and lower values of measure-singular value decomposition (5.8 to 6.6) than other algorithms.
ERIC Educational Resources Information Center
Ohrn, Deborah Gore, Ed.
1993-01-01
This issue of the Goldfinch highlights some of Iowa's 20th century women of achievement. These women have devoted their lives to working for human rights, education, equality, and individual rights. They come from the worlds of politics, art, music, education, sports, business, entertainment, and social work. They represent Native Americans,…
Achieving Peace through Education.
ERIC Educational Resources Information Center
Clarken, Rodney H.
While it is generally agreed that peace is desirable, there are barriers to achieving a peaceful world. These barriers are classified into three major areas: (1) an erroneous view of human nature; (2) injustice; and (3) fear of world unity. In a discussion of these barriers, it is noted that although the consciousness and conscience of the world…
Increasing Male Academic Achievement
ERIC Educational Resources Information Center
Jackson, Barbara Talbert
2008-01-01
The No Child Left Behind legislation has brought greater attention to the academic performance of American youth. Its emphasis on student achievement requires a closer analysis of assessment data by school districts. To address the findings, educators must seek strategies to remedy failing results. In a mid-Atlantic district of the Unites States,…
Leadership Issues: Raising Achievement.
ERIC Educational Resources Information Center
Horsfall, Chris, Ed.
This document contains five papers examining the meaning and operation of leadership as a variable affecting student achievement in further education colleges in the United Kingdom. "Introduction" (Chris Horsfall) discusses school effectiveness studies' findings regarding the relationship between leadership and effective schools, distinguishes…
ERIC Educational Resources Information Center
Goodwin, MacArthur
2000-01-01
Focuses on policy issues that have affected arts education in the twentieth century, such as: interest in discipline-based arts education, influence of national arts associations, and national standards and coordinated assessment. States that whether the policy decisions are viewed as achievements or disasters are for future determination. (CMK)
ERIC Educational Resources Information Center
Napier, Rod; Sanaghan, Patrick
2002-01-01
Uses the example of Vermont's Middlebury College to explore the challenges and possibilities of achieving consensus about institutional change. Discusses why, unlike in this example, consensus usually fails, and presents four demands of an effective consensus process. Includes a list of "test" questions on successful collaboration. (EV)
School Students' Science Achievement
ERIC Educational Resources Information Center
Shymansky, James; Wang, Tzu-Ling; Annetta, Leonard; Everett, Susan; Yore, Larry D.
2013-01-01
This paper is a report of the impact of an externally funded, multiyear systemic reform project on students' science achievement on a modified version of the Third International Mathematics and Science Study (TIMSS) test in 33 small, rural school districts in two Midwest states. The systemic reform effort utilized a cascading leadership strategy…
Essays on Educational Achievement
ERIC Educational Resources Information Center
Ampaabeng, Samuel Kofi
2013-01-01
This dissertation examines the determinants of student outcomes--achievement, attainment, occupational choices and earnings--in three different contexts. The first two chapters focus on Ghana while the final chapter focuses on the US state of Massachusetts. In the first chapter, I exploit the incidence of famine and malnutrition that resulted to…
Assessing Handwriting Achievement.
ERIC Educational Resources Information Center
Ediger, Marlow
Teachers in the school setting need to emphasize quality handwriting across the curriculum. Quality handwriting means that the written content is easy to read in either manuscript or cursive form. Handwriting achievement can be assessed, though not with the precision of assessing basic addition, subtraction, multiplication, and division facts.…
Intelligence and Educational Achievement
ERIC Educational Resources Information Center
Deary, Ian J.; Strand, Steve; Smith, Pauline; Fernandes, Cres
2007-01-01
This 5-year prospective longitudinal study of 70,000+ English children examined the association between psychometric intelligence at age 11 years and educational achievement in national examinations in 25 academic subjects at age 16. The correlation between a latent intelligence trait (Spearman's "g" from CAT2E) and a latent trait of educational…
Explorations in achievement motivation
NASA Technical Reports Server (NTRS)
Helmreich, Robert L.
1982-01-01
Recent research on the nature of achievement motivation is reviewed. A three-factor model of intrinsic motives is presented and related to various criteria of performance, job satisfaction and leisure activities. The relationships between intrinsic and extrinsic motives are discussed. Needed areas for future research are described.
ERIC Educational Resources Information Center
Bracey, Gerald W.
2008-01-01
In his "Wall Street Journal" op-ed on the 25th of anniversary of "A Nation At Risk", former assistant secretary of education Chester E. Finn Jr. applauded the report for turning U.S. education away from equality and toward achievement. It was not surprising, then, that in mid-2008, Finn arranged a conference to examine the potential "Robin Hood…
Intelligence, Personality and Achievement.
ERIC Educational Resources Information Center
Muir, R. C.; And Others
A longitudinal developmental study of a group of middle class children is described, with emphasis on a segment of the research investigating the relationship of achievement, intelligence, and emotional disturbance. The subjects were 105 children aged five to 6.3 attending two schools in Montreal. Each child was assessed in the areas of…
SALT and Spelling Achievement.
ERIC Educational Resources Information Center
Nelson, Joan
A study investigated the effects of suggestopedic accelerative learning and teaching (SALT) on the spelling achievement, attitudes toward school, and memory skills of fourth-grade students. Subjects were 20 male and 28 female students from two self-contained classrooms at Kennedy Elementary School in Rexburg, Idaho. The control classroom and the…
Appraising Reading Achievement.
ERIC Educational Resources Information Center
Ediger, Marlow
To determine quality sequence in pupil progress, evaluation approaches need to be used which guide the teacher to assist learners to attain optimally. Teachers must use a variety of procedures to appraise student achievement in reading, because no one approach is adequate. Appraisal approaches might include: (1) observation and subsequent…
An Experimental Method for the Active Learning of Greedy Algorithms
ERIC Educational Resources Information Center
Velazquez-Iturbide, J. Angel
2013-01-01
Greedy algorithms constitute an apparently simple algorithm design technique, but their learning goals are not simple to achieve. We present a didactic method aimed at promoting active learning of greedy algorithms. The method is focused on the concept of selection function, and is based on explicit learning goals. It mainly consists of an…
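The "selection function" concept that the abstract centers on can be made concrete with a classic example (not taken from the paper): activity selection, where the selection function picks, among the remaining feasible activities, the one that finishes earliest. This is a minimal illustrative sketch.

```python
# Greedy algorithm with an explicit selection function: activity selection.
# Each activity is a (start, finish) pair; the greedy choice is the feasible
# activity with the earliest finishing time.

def select(candidates, last_finish):
    """Selection function: earliest-finishing activity that starts
    no earlier than the end of the last selected activity."""
    feasible = [a for a in candidates if a[0] >= last_finish]
    return min(feasible, key=lambda a: a[1]) if feasible else None

def activity_selection(activities):
    """Build a schedule by repeatedly applying the selection function."""
    chosen, last_finish = [], float("-inf")
    remaining = list(activities)
    while True:
        pick = select(remaining, last_finish)
        if pick is None:
            return chosen
        chosen.append(pick)
        last_finish = pick[1]
        remaining.remove(pick)

activities = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
print(activity_selection(activities))  # [(1, 4), (5, 7), (8, 11)]
```

Isolating the greedy choice in its own `select` function, as the didactic method emphasizes, lets students swap in different selection functions and observe which ones yield optimal schedules.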
Feedback algorithm for simulation of multi-segmented cracks
Chady, T.; Napierala, L.
2011-06-23
In this paper, a method for obtaining a three-dimensional crack model from a radiographic image is discussed. A genetic algorithm aiming at close simulation of the crack's shape is presented. Results obtained with the genetic algorithm are compared to those achieved in the authors' previous work. The described algorithm has been tested on both simulated and real-life cracks.
[Decision on the rational algorithm in treatment of kidney cysts].
Antonov, A V; Ishutin, E Iu; Guliev, R N
2012-01-01
The article presents an algorithm for the diagnosis and treatment of renal cysts and other liquid neoplasms of the retroperitoneal space, based on an analysis of 270 case histories. The algorithm takes into account the achievements of modern medical technologies developed in recent years. The application of the proposed algorithm should improve the efficiency of diagnosis and the quality of treatment of patients with renal cysts.
Fast training algorithms for multilayer neural nets.
Brent, R P
1991-01-01
An algorithm that is faster than back-propagation and for which it is not necessary to specify the number of hidden units in advance is described. The relationship with other fast pattern-recognition algorithms, such as algorithms based on k-d trees, is discussed. The algorithm has been implemented and tested on artificial problems, such as the parity problem, and on real problems arising in speech recognition. Experimental results, including training times and recognition accuracy, are given. Generally, the algorithm achieves accuracy as good as or better than nets trained using back-propagation. Accuracy is comparable to that for the nearest-neighbor algorithm, which is slower and requires more storage space.
1997-06-13
Project ACHIEVE was a math/science academic enhancement program aimed at first year high school Hispanic American students. Four high schools -- two in El Paso, Texas and two in Bakersfield, California -- participated in this Department of Energy-funded program during the spring and summer of 1996. Over 50 students, many of whom felt they were facing a nightmare future, were given the opportunity to work closely with personal computers and software, sophisticated calculators, and computer-based laboratories -- an experience which their regular academic curriculum did not provide. Math and science projects, exercises, and experiments were completed that emphasized independent and creative applications of scientific and mathematical theories to real world problems. The most important outcome was the exposure Project ACHIEVE provided to students concerning the college and technical-field career possibilities available to them.
A Modified Decision Tree Algorithm Based on Genetic Algorithm for Mobile User Classification Problem
Liu, Dong-sheng; Fan, Shu-jiang
2014-01-01
In order to offer mobile customers better service, the mobile user must first be classified. To address the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take context information as classification attributes for the mobile user, dividing context into public and private classes. We then analyze the processes and operators of the algorithm. Finally, we conduct an experiment on mobile user data with the algorithm, classifying mobile users into Basic service, E-service, Plus service, and Total service user classes, and deriving rules about the mobile user. Compared to the C4.5 decision tree algorithm and the SVM algorithm, the algorithm proposed in this paper has higher accuracy and greater simplicity. PMID:24688389
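The general idea of using a genetic algorithm to improve a classifier can be sketched as follows. This is not the authors' implementation: here a GA evolves a feature-subset mask scored by a simple leave-one-out 1-NN classifier on toy "mobile user" records, and all data and GA parameters are illustrative assumptions.

```python
# GA searching over feature subsets to improve a simple classifier,
# analogous in spirit to GA-optimizing a decision tree for user classes.
import random

random.seed(42)

# Toy "mobile user" records: (feature vector, service class). Hypothetical data.
DATA = [([1, 0, 3, 9], "basic"), ([2, 1, 3, 1], "e-service"),
        ([1, 0, 4, 8], "basic"), ([2, 1, 2, 2], "e-service"),
        ([5, 4, 0, 7], "plus"),  ([5, 5, 1, 1], "plus")]

def accuracy(mask):
    """Leave-one-out 1-NN accuracy using only the features enabled in mask."""
    idx = [i for i, bit in enumerate(mask) if bit]
    if not idx:
        return 0.0
    correct = 0
    for k, (x, y) in enumerate(DATA):
        others = [d for j, d in enumerate(DATA) if j != k]
        nearest = min(others, key=lambda d: sum((x[i] - d[0][i]) ** 2 for i in idx))
        correct += nearest[1] == y
    return correct / len(DATA)

def evolve(pop_size=8, generations=20, n_feats=4):
    pop = [[random.randint(0, 1) for _ in range(n_feats)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=accuracy, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_feats)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:           # bit-flip mutation
                i = random.randrange(n_feats)
                child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=accuracy)

best = evolve()
print(best, accuracy(best))
```

In the paper's setting, the chromosome would encode decision-tree choices rather than a feature mask, but the selection/crossover/mutation loop is the same.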
Evaluation of RADAP II Severe-Storm-Detection Algorithms.
NASA Astrophysics Data System (ADS)
Winston, Herb A.; Ruthi, Larry J.
1986-02-01
Computer-generated volumetric radar algorithms have been available at a few operational National Weather Service sites since the mid-1970s under the Digitized Radar Experiment (D/RADEX) and Radar Data Processor (RADAP II) programs. The algorithms were first used extensively for severe-storm warnings at the Oklahoma City National Weather Service Forecast Office (WSFO OKC) in 1983. RADAP II performance in operational severe-weather forecasting was evaluated using objectively derived warnings based on computer-generated output. Statistical scores of probability of detection, false-alarm rate, and critical-success index for the objective warnings were found to be significantly higher than the average statistical scores reported for National Weather Service warnings. Even higher statistical scores were achieved by experienced forecasters using RADAP II in addition to conventional data during the 1983 severe-storm season at WSFO OKC. This investigation lends further support to the suggestion that incorporating improved reflectivity-based algorithms with Doppler into the future Advanced Weather Interactive Processing System for the 1990s (AWIPS-90) or the Next Generation Weather Radar (NEXRAD) system should greatly enhance severe-storm-detection capabilities.
Asian Americans and Higher Education.
ERIC Educational Resources Information Center
Endo, Russell
1980-01-01
Problems that Asian Americans face in higher education include poor communications skills; stress resulting from family and community pressure to achieve; and universities' reluctance to hire Asian American staff, recruit and provide financial support for Asian American students, and provide relevant curriculum. Various programs have begun to…
2011 Higher Education Sustainability Review
ERIC Educational Resources Information Center
Wagner, Margo, Ed.
2012-01-01
Looking through the lens of AASHE Bulletin stories in 2011, this year's review reveals an increased focus on higher education access, affordability, and success; more green building efforts than ever before; and growing campus-community engagement on food security, among many other achievements. Contributors include James Applegate (Lumina…
Cao, Jianfang; Cui, Hongyan; Shi, Hao; Jiao, Lijuan
2016-01-01
A back-propagation (BP) neural network can solve complicated random nonlinear mapping problems; therefore, it can be applied to a wide range of problems. However, as the sample size increases, the time required to train BP neural networks becomes lengthy. Moreover, the classification accuracy decreases as well. To improve the classification accuracy and runtime efficiency of the BP neural network algorithm, we proposed a parallel design and realization method for a particle swarm optimization (PSO)-optimized BP neural network based on MapReduce on the Hadoop platform using both the PSO algorithm and a parallel design. The PSO algorithm was used to optimize the BP neural network's initial weights and thresholds and improve the accuracy of the classification algorithm. The MapReduce parallel programming model was utilized to achieve parallel processing of the BP algorithm, thereby solving the problems of hardware and communication overhead when the BP neural network addresses big data. Datasets on 5 different scales were constructed using the scene image library from the SUN Database. The classification accuracy of the parallel PSO-BP neural network algorithm is approximately 92%, and the system efficiency is approximately 0.85, which presents obvious advantages when processing big data. The algorithm proposed in this study demonstrated both higher classification accuracy and improved time efficiency, which represents a significant improvement obtained from applying parallel processing to an intelligent algorithm on big data. PMID:27304987
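The PSO stage described above can be sketched generically. This is a stand-in, not the paper's system: the fitness here is a toy quadratic "loss" with an arbitrary target vector, whereas the paper would evaluate the BP network's classification error for each candidate initial weight vector; swarm size, inertia, and acceleration constants are conventional assumed values.

```python
# Generic particle swarm optimization over a candidate weight vector.
import random

random.seed(0)

def loss(w):
    """Stand-in fitness: squared distance from an arbitrary 'good' weight
    vector (a real system would return the network's training error)."""
    target = [0.5, -1.0, 2.0]
    return sum((wi - ti) ** 2 for wi, ti in zip(w, target))

def pso(dim=3, particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]                     # personal bests
    gbest = min(pbest, key=loss)[:]                 # global best
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if loss(pos[i]) < loss(pbest[i]):
                pbest[i] = pos[i][:]
                if loss(pbest[i]) < loss(gbest):
                    gbest = pbest[i][:]
    return gbest

initial_weights = pso()
print(initial_weights, loss(initial_weights))
```

The returned `initial_weights` would then seed gradient-based BP training; in the paper this whole evaluation loop is additionally parallelized with MapReduce.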
POSE Algorithms for Automated Docking
NASA Technical Reports Server (NTRS)
Heaton, Andrew F.; Howard, Richard T.
2011-01-01
POSE (relative position and attitude) can be computed in many different ways. Given a sensor that measures bearing to a finite number of spots corresponding to known features (such as a target) of a spacecraft, a number of different algorithms can be used to compute the POSE. NASA has sponsored the development of a flash LIDAR proximity sensor called the Vision Navigation Sensor (VNS) for use by the Orion capsule in future docking missions. This sensor generates data that can be used by a variety of algorithms to compute POSE solutions inside of 15 meters, including at the critical docking range of approximately 1-2 meters. Previously NASA participated in a DARPA program called Orbital Express that achieved the first automated docking for the American space program. During this mission a large set of high quality mated sensor data was obtained at what is essentially the docking distance. This data set is perhaps the most accurate truth data in existence for docking proximity sensors in orbit. In this paper, the flight data from Orbital Express is used to test POSE algorithms at 1.22 meters range. Two different POSE algorithms are tested for two different Fields-of-View (FOVs) and two different pixel noise levels. The results of the analysis are used to predict future performance of the POSE algorithms with VNS data.
An Efficient Reachability Analysis Algorithm
NASA Technical Reports Server (NTRS)
Vatan, Farrokh; Fijany, Amir
2008-01-01
A document discusses a new algorithm for generating higher-order dependencies for diagnostic and sensor placement analysis when a system is described with a causal modeling framework. This innovation will be used in diagnostic and sensor optimization and analysis tools. Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in-situ platforms. This algorithm will serve as a powerful tool for technologies that satisfy a key requirement of autonomous spacecraft, including science instruments and in-situ missions.
Algorithmic Mechanism Design of Evolutionary Computation
Pei, Yan
2015-01-01
We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolutionary behaviour correctly in order to achieve the desired, preset objective(s). As a case study, we propose a formal framework for parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results demonstrate the efficiency of the framework. This primary principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to treat evolutionary computation design as an algorithmic mechanism design problem and establish its fundamentals from this perspective. This paper is the first step towards achieving this objective, implementing a strategy equilibrium solution (such as the Nash equilibrium) in an evolutionary computation algorithm. PMID:26257777
Higher harmonic generation microscopy.
Sun, Chi-Kuang
2005-01-01
Higher harmonic generation, including second harmonic generation and third harmonic generation, deposits no energy in the interacted matter due to its virtual-level transition characteristic, providing a truly non-invasive modality ideal for in vivo imaging of live specimens without any preparation. Second harmonic generation microscopy images stacked membranes and arranged proteins with organized nano-structures due to the bio-photonic crystalline effect. Third harmonic generation microscopy provides general cellular or subcellular interface imaging due to optical inhomogeneity. Because of their virtual-transition nature, no saturation or bleaching of the generated signal is expected. With no energy release, continuous viewing without compromising sample viability can thus be achieved. Combined with its nonlinearity, higher harmonic generation microscopy provides sub-micron three-dimensional sectioning capability and millimeter penetration in live samples without using fluorescence or exogenous markers, offering morphological, structural, functional, and cellular information about biomedical specimens without modifying their natural biological and optical environments.
Library of Continuation Algorithms
2005-03-01
LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton's method for their nonlinear solve.
NASA Technical Reports Server (NTRS)
Fatemi, Emad; Jerome, Joseph; Osher, Stanley
1989-01-01
A micron n+ - n - n+ silicon diode is simulated via the hydrodynamic model for carrier transport. The numerical algorithms employed are for the non-steady case, and a limiting process is used to reach steady state. The simulation employs shock capturing algorithms, and indeed shocks, or very rapid transition regimes, are observed in the transient case for the coupled system, consisting of the potential equation and the conservation equations describing charge, momentum, and energy transfer for the electron carriers. These algorithms, termed essentially non-oscillatory, were successfully applied in other contexts to model the flow in gas dynamics, magnetohydrodynamics, and other physical situations involving the conservation laws in fluid mechanics. The method here is first order in time, but the use of small time steps allows for good accuracy. Runge-Kutta methods allow one to achieve higher accuracy in time if desired. The spatial accuracy is of high order in regions of smoothness.
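The shock-capturing idea invoked above can be illustrated with a much simpler scheme than the essentially non-oscillatory methods the paper uses: the following sketch applies first-order Lax-Friedrichs to the inviscid Burgers equation, showing how a conservative update with small time steps stably captures a steepening front. Grid size, CFL number, and initial data are assumptions chosen for illustration.

```python
# First-order conservative Lax-Friedrichs update for u_t + (u^2/2)_x = 0.

def lax_friedrichs_step(u, dt, dx):
    """One Lax-Friedrichs step; boundary values are held fixed."""
    f = [0.5 * ui * ui for ui in u]          # Burgers flux f(u) = u^2/2
    new = u[:]
    for i in range(1, len(u) - 1):
        new[i] = 0.5 * (u[i - 1] + u[i + 1]) - dt / (2 * dx) * (f[i + 1] - f[i - 1])
    return new

nx, dx, dt = 101, 0.01, 0.004                # dt chosen so max|u| * dt/dx < 1 (CFL)
u = [1.0 if i < nx // 2 else 0.0 for i in range(nx)]   # step initial data
for _ in range(50):
    u = lax_friedrichs_step(u, dt, dx)
print(min(u), max(u))
```

Because the scheme is monotone under the CFL condition, the solution stays within the initial bounds [0, 1]; the price is the heavy numerical diffusion that higher-order ENO schemes, like those in the paper, are designed to avoid.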
Tensor network algorithm by coarse-graining tensor renormalization on finite periodic lattices
NASA Astrophysics Data System (ADS)
Zhao, Hui-Hai; Xie, Zhi-Yuan; Xiang, Tao; Imada, Masatoshi
2016-03-01
We develop coarse-graining tensor renormalization group algorithms to compute physical properties of two-dimensional lattice models on finite periodic lattices. Two different coarse-graining strategies, one based on the tensor renormalization group and the other based on the higher-order tensor renormalization group, are introduced. In order to optimize the tensor network model globally, a sweeping scheme is proposed to account for the renormalization effect from the environment tensors under the framework of the second renormalization group. We demonstrate the algorithms on the classical Ising model on the square lattice and the Kitaev model on the honeycomb lattice, and show that the finite-size algorithms achieve substantially more accurate results than the corresponding infinite-size ones.
Music training and mathematics achievement.
Cheek, J M; Smith, L R
1999-01-01
Iowa Tests of Basic Skills (ITBS) mathematics scores of eighth graders who had received music instruction were compared according to whether the students were given private lessons. Comparisons also were made between students whose lessons were on the keyboard versus other music lessons. Analyses indicated that students who had private lessons for two or more years performed significantly better on the composite mathematics portion of the ITBS than did students who did not have private lessons. In addition, students who received lessons on the keyboard had significantly higher ITBS mathematics scores than did students whose lessons did not involve the keyboard. These results are discussed in relation to previous research on music training and mathematics achievement.
Leadership, self-efficacy, and student achievement
NASA Astrophysics Data System (ADS)
Grayson, Kristin
This study examined the relationships between teacher leadership, science teacher self-efficacy, and fifth-grade science student achievement in diverse schools in a San Antonio, Texas, metropolitan school district. Teachers completed a modified version of the Leadership Behavior Description Questionnaire (LBDQ) Form XII by Stogdill (1969) and the Science Efficacy and Belief Expectations for Science Teaching (SEBEST) by Ritter, Boone, and Rubba (2001, January). Students' scores on the Texas Assessment of Knowledge and Skills (TAKS) measured fifth-grade science achievement. At the teacher level of analysis, multiple regressions showed relationships between teachers' science self-efficacy, teacher classroom leadership behaviors, and various teacher and school demographic variables. Predictors of teacher self-efficacy beliefs included the teacher's level of education, gender, and leadership initiating structure. The only significant predictor of teacher self-efficacy outcome expectancy was gender. Higher teacher self-efficacy beliefs predicted higher leadership initiating structure. At the school level of analysis, higher percentages of students from low socio-economic backgrounds and higher percentages of limited-English-proficient students predicted lower school mean science achievement. These findings suggest a need for continued research to clarify relationships between teacher classroom leadership, science teacher self-efficacy, and student achievement, especially at the teacher level of analysis. Findings also indicate the importance of developing instructional methods that address student demographics and needs so that all students, whatever their backgrounds, will achieve in science.
A multilevel system of algorithms for detecting and isolating signals in a background of noise
NASA Technical Reports Server (NTRS)
Gurin, L. S.; Tsoy, K. A.
1978-01-01
Signal information is processed with the help of algorithms, and then on the basis of such processing, a part of the information is subjected to further processing with the help of more precise algorithms. Such a system of algorithms is studied, a comparative evaluation of a series of lower level algorithms is given, and the corresponding algorithms of higher level are characterized.
A novel chaos danger model immune algorithm
NASA Astrophysics Data System (ADS)
Xu, Qingyang; Wang, Song; Zhang, Li; Liang, Ying
2013-11-01
Making use of the ergodicity and randomness of chaos, a novel chaos danger model immune algorithm (CDMIA) is presented by combining the benefits of chaos and the danger model immune algorithm (DMIA). To maintain the diversity of antibodies and ensure the performance of the algorithm, two chaotic operators are proposed. Chaotic disturbance is used to update the danger antibody to exploit the local solution space, and chaotic regeneration is applied to the safe antibody to explore the entire solution space. In addition, the performance of the algorithm is examined on several benchmark problems. The experimental results indicate that the diversity of the population is improved noticeably, and the CDMIA exhibits a higher efficiency than the danger model immune algorithm and other optimization algorithms.
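A chaotic operator of the kind described above is commonly built from the logistic map; whether CDMIA uses exactly this map, and the perturbation scale below, are assumptions for illustration. The map's ergodicity lets a "chaotic disturbance" sweep a neighbourhood of a danger antibody without repeating a fixed pattern.

```python
# Logistic-map-driven chaotic disturbance of an antibody (solution vector).

def logistic_map(x, r=4.0):
    """Logistic map in its fully chaotic regime; maps (0, 1) onto (0, 1)."""
    return r * x * (1.0 - x)

def chaotic_disturbance(antibody, chaos_state, scale=0.1):
    """Perturb each component with a chaos-driven offset in [-scale, scale].
    Returns the disturbed antibody and the updated chaos state, so successive
    calls continue the same chaotic trajectory."""
    disturbed = []
    x = chaos_state
    for gene in antibody:
        x = logistic_map(x)
        disturbed.append(gene + scale * (2.0 * x - 1.0))  # rescale (0,1) -> (-1,1)
    return disturbed, x

antibody = [0.5, 1.2, -0.3]
new_antibody, next_state = chaotic_disturbance(antibody, chaos_state=0.37)
print(new_antibody)
```

The complementary "chaotic regeneration" operator would instead replace a safe antibody wholesale, mapping the chaotic variable onto the full search interval rather than a small neighbourhood.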
Bradburne, John; Patton, Tisha C.
2001-02-25
When Fluor Fernald took over the management of the Fernald Environmental Management Project in 1992, the estimated closure date of the site was more than 25 years into the future. Fluor Fernald, in conjunction with DOE-Fernald, introduced the Accelerated Cleanup Plan, which was designed to substantially shorten that schedule and save taxpayers more than $3 billion. The management of Fluor Fernald believes there are three fundamental concerns that must be addressed by any contractor hoping to achieve closure of a site within the DOE complex. They are relationship management, resource management and contract management. Relationship management refers to the interaction between the site and local residents, regulators, union leadership, the workforce at large, the media, and any other interested stakeholder groups. Resource management is of course related to the effective administration of the site knowledge base and the skills of the workforce, the attraction and retention of qualified and competent technical personnel, and the best recognition and use of appropriate new technologies. Perhaps most importantly, resource management must also include a plan for survival in a flat-funding environment. Lastly, creative and disciplined contract management will be essential to effecting the closure of any DOE site. Fluor Fernald, together with DOE-Fernald, is breaking new ground in the closure arena, and ''business as usual'' has become a thing of the past. How Fluor Fernald has managed its work at the site over the last eight years, and how it will manage the new site closure contract in the future, will be an integral part of achieving successful closure at Fernald.
Genetic Algorithm for Optimization: Preprocessor and Algorithm
NASA Technical Reports Server (NTRS)
Sen, S. K.; Shaykhian, Gholam A.
2006-01-01
A genetic algorithm (GA), inspired by Darwin's theory of evolution and employed to solve optimization problems - unconstrained or constrained - uses an evolutionary process. A GA has several parameters, such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known or determined a priori for all problems. Depending on the problem at hand, these parameters need to be chosen so that the resulting GA performs best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is best suited to the problem. We stress also the need for such a preprocessor both for quality (error) and for cost (complexity) in producing the solution. The preprocessor includes, as its first step, making use of all available information, such as the nature/character of the function/system, the search space, physical/laboratory experimentation (if already done/available), and the physical environment. It also includes information that can be generated through any means - deterministic/nondeterministic/graphics. Instead of attempting a solution of the problem straightaway through a GA without using knowledge of the character of the system, we do a consciously much better job of producing a solution by using the information generated in the very first step of the preprocessor. We therefore unstintingly advocate the use of a preprocessor to solve a real-world optimization problem, including NP-complete ones, before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.
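One crude way to realize the preprocessor idea is to probe a few candidate parameter settings on a small budget and keep the setting whose GA makes the most progress before committing to a full run. The candidate grid, budget, test function, and GA operators below are illustrative assumptions, not the authors' procedure.

```python
# Tiny real-coded GA for unconstrained minimization, plus a "preprocessor"
# that picks (population size, mutation probability) by cheap trial runs.
import random

random.seed(1)

def sphere(x):
    """Test objective: sum of squares, minimized at the origin."""
    return sum(xi * xi for xi in x)

def run_ga(pop_size, mut_prob, generations=30, dim=4):
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sphere)
        parents = pop[: max(2, pop_size // 2)]            # elitist selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = [(ai + bi) / 2 for ai, bi in zip(a, b)]  # blend crossover
            if random.random() < mut_prob:
                i = random.randrange(dim)
                child[i] += random.gauss(0, 0.5)             # Gaussian mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=sphere)

def preprocess():
    """Pick GA parameters by a cheap trial run of each candidate setting."""
    candidates = [(10, 0.1), (20, 0.3), (40, 0.6)]
    return min(candidates, key=lambda c: sphere(run_ga(*c, generations=10)))

pop_size, mut_prob = preprocess()
best = run_ga(pop_size, mut_prob)
print(pop_size, mut_prob, sphere(best))
```

A real preprocessor, as the abstract describes, would also fold in problem knowledge (function character, search space, experimental data) rather than relying on trial runs alone.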
NASA Astrophysics Data System (ADS)
Peña, M.
2016-10-01
Achieving an acceptable signal-to-noise ratio (SNR) can be difficult when working in sparsely populated waters and/or when species have low scattering, such as fluid-filled animals. The increasing use of higher frequencies and the study of deeper depths in fisheries acoustics, as well as the use of commercial vessels, raise the need for good denoising algorithms. Using a lower Sv threshold to remove noise or unwanted targets is not suitable in many cases and increases the relative background noise component in the echogram, demanding more effectiveness from denoising algorithms. The Adaptive Wiener Filter (AWF) denoising algorithm is presented in this study. The technique is based on the AWF commonly used in digital photography and video enhancement. The algorithm first improves data quality with variance-dependent smoothing, then estimates the noise level as the envelope of the Sv minima. The AWF denoising algorithm outperforms existing algorithms in the presence of Gaussian, speckle, and salt-and-pepper noise, although impulse noise must be removed beforehand. Cleaned echograms present homogeneous echotraces with outlined edges.
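A minimal sketch of the locally adaptive Wiener filter the abstract builds on, as used in image denoising: each pixel is shrunk toward its local mean in proportion to how much the local variance exceeds the noise variance. The window size and the crude global noise estimate are illustrative assumptions; they do not reproduce the paper's Sv-minima noise estimator.

```python
import numpy as np

def adaptive_wiener(img, win=3, noise_var=None):
    """Local (adaptive) Wiener filter over a win x win sliding window."""
    pad = win // 2
    padded = np.pad(img, pad, mode='reflect')
    windows = np.lib.stride_tricks.sliding_window_view(padded, (win, win))
    mean = windows.mean(axis=(-1, -2))   # local mean, same shape as img
    var = windows.var(axis=(-1, -2))     # local variance
    if noise_var is None:
        noise_var = var.mean()           # crude global noise estimate (assumption)
    # Shrink toward the local mean where local variance is near the noise floor.
    gain = np.maximum(var - noise_var, 0) / np.maximum(var, 1e-12)
    return mean + gain * (img - mean)

noisy = np.random.default_rng(0).normal(5.0, 1.0, (32, 32))
clean = adaptive_wiener(noisy)
```

In flat regions the gain is near zero (heavy smoothing); near edges the local variance dominates and the pixel is left mostly intact, which is why the cleaned echograms keep outlined edges.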
RB Particle Filter Time Synchronization Algorithm Based on the DPM Model
Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na
2015-01-01
Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of the continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm, the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two sets of variables are estimated using a Kalman filter and a particle filter, respectively, which improves computational efficiency compared with using the particle filter alone. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture model components based on the observational data. This improves the estimation accuracy of clock offset and skew, enabling time synchronization. The time synchronization performance of this algorithm is validated by computer simulations and experimental measurements. The results show that the proposed algorithm has higher time synchronization precision than traditional time synchronization algorithms. PMID:26404291
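The linear substructure the abstract mentions, offset driven by skew, is what Rao-Blackwellisation delegates to a Kalman filter. Below is a sketch of that Kalman part alone (the particle-filter and DPM components are omitted); the state-space matrices, noise levels, and simulated measurements are illustrative assumptions.

```python
import numpy as np

def kalman_clock(measurements, dt=1.0, q=1e-6, r=1e-2):
    """Estimate clock [offset, skew] from noisy offset measurements."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # offset += skew * dt each step
    H = np.array([[1.0, 0.0]])              # only the offset is observed
    Q = q * np.eye(2)                       # process noise (assumption)
    R = np.array([[r]])                     # measurement noise (assumption)
    x = np.zeros((2, 1))                    # state: [offset, skew]
    P = np.eye(2)
    for z in measurements:
        x = F @ x                           # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                 # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
    return x.ravel()

# Simulated clock: initial offset 0.5 s, skew 0.01 s/s, noisy observations.
rng = np.random.default_rng(1)
true_offset = 0.5 + 0.01 * np.arange(200)
zs = true_offset + rng.normal(0, 0.1, 200)
offset, skew = kalman_clock(zs)
```

Running a Kalman filter over the linear part like this, rather than sampling it with particles, is the computational saving the RB construction provides.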
Achieving ultra-high temperatures with a resistive emitter array
NASA Astrophysics Data System (ADS)
Danielson, Tom; Franks, Greg; Holmes, Nicholas; LaVeigne, Joe; Matis, Greg; McHugh, Steve; Norton, Dennis; Vengel, Tony; Lannon, John; Goodwin, Scott
2016-05-01
The rapid development of very-large format infrared detector arrays has challenged the IR scene projector community to also develop larger-format infrared emitter arrays to support the testing of systems incorporating these detectors. In addition to larger formats, many scene projector users require much higher simulated temperatures than can be generated with current technology in order to fully evaluate the performance of their systems and associated processing algorithms. Under the Ultra High Temperature (UHT) development program, Santa Barbara Infrared Inc. (SBIR) is developing a new infrared scene projector architecture capable of producing both very large format (>1024 x 1024) resistive emitter arrays and improved emitter pixel technology capable of simulating very high apparent temperatures. During earlier phases of the program, SBIR demonstrated materials with MWIR apparent temperatures in excess of 1400 K. New emitter materials have subsequently been selected to produce pixels that achieve even higher apparent temperatures. Test results from pixels fabricated using the new material set will be presented and discussed. A 'scalable' Read In Integrated Circuit (RIIC) is also being developed under the same UHT program to drive the high temperature pixels. This RIIC will utilize through-silicon via (TSV) and Quilt Packaging (QP) technologies to allow seamless tiling of multiple chips to fabricate very large arrays, and thus overcome the yield limitations inherent in large-scale integrated circuits. Results of design verification testing of the completed RIIC will be presented and discussed.
Improved autonomous star identification algorithm
NASA Astrophysics Data System (ADS)
Luo, Li-Yan; Xu, Lu-Ping; Zhang, Hua; Sun, Jing-Rong
2015-06-01
The log-polar transform (LPT) is introduced into star identification because of its rotation invariance. An improved autonomous star identification algorithm is proposed in this paper to avoid the circular shift of the feature vector and to reduce the time consumed by star identification algorithms using the LPT. In the proposed algorithm, the star pattern of a navigation star remains unchanged when the stellar image is rotated, which reduces the star identification time. The logarithmic values of the plane distances between the navigation star and its neighbor stars are adopted to construct the feature vector of the navigation star, which enhances the robustness of star identification. In addition, the algorithm is designed to find the identification result with fewer comparisons, instead of searching the whole feature database. The simulation results demonstrate that the proposed algorithm can effectively accelerate star identification. Moreover, the recognition rate and robustness of the proposed algorithm are better than those of the LPT algorithm and the modified grid algorithm. Project supported by the National Natural Science Foundation of China (Grant Nos. 61172138 and 61401340), the Open Research Fund of the Academy of Satellite Application, China (Grant No. 2014_CXJJ-DH_12), the Fundamental Research Funds for the Central Universities, China (Grant Nos. JB141303 and 201413B), the Natural Science Basic Research Plan in Shaanxi Province, China (Grant No. 2013JQ8040), the Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20130203120004), and the Xi'an Science and Technology Plan, China (Grant No. CXY1350(4)).
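The rotation invariance that motivates the approach can be sketched directly: rotating a star field about the reference star changes only the angular coordinate, so a feature built from log-radial distances alone is unchanged. The star coordinates below are illustrative, not flight data.

```python
import math

def log_polar_feature(points, center):
    """Sorted log-distances from `center`; invariant to rotation about it."""
    cx, cy = center
    return sorted(round(math.log(math.hypot(x - cx, y - cy)), 6)
                  for x, y in points)

def rotate(points, angle):
    """Rotate points about the origin by `angle` radians."""
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

stars = [(1.0, 0.5), (2.0, -1.0), (-0.5, 1.5)]   # neighbor stars (illustrative)
feature = log_polar_feature(stars, (0.0, 0.0))
rotated_feature = log_polar_feature(rotate(stars, 0.7), (0.0, 0.0))
```

Because the two feature vectors match regardless of camera roll, no circular shift of the feature vector is needed at matching time, which is the source of the claimed speedup.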
Achievement Goals and Achievement Emotions: A Meta-Analysis
ERIC Educational Resources Information Center
Huang, Chiungjung
2011-01-01
This meta-analysis synthesized 93 independent samples (N = 30,003) in 77 studies that reported in 78 articles examining correlations between achievement goals and achievement emotions. Achievement goals were meaningfully associated with different achievement emotions. The correlations of mastery and mastery approach goals with positive achievement…
The Virginia Plan for Higher Education, 1989.
ERIC Educational Resources Information Center
Virginia State Council of Higher Education, Richmond.
The Council of Higher Education, in this state-mandated biennial plan, sets four goals for Virginia's state-supported system of higher education to achieve: access, excellence, accountability, and placement among the best systems of higher education in the United States. The plan concentrates on the 84 degree-granting institutions that have been…
HEATR project: ATR algorithm parallelization
NASA Astrophysics Data System (ADS)
Deardorf, Catherine E.
1998-09-01
High Performance Computing (HPC) Embedded Application for Target Recognition (HEATR) is a project funded by the High Performance Computing Modernization Office through the Common HPC Software Support Initiative (CHSSI). The goal of CHSSI is to produce portable, parallel, multi-purpose, freely distributable support software to exploit emerging parallel computing technologies and enable application of scalable HPCs for various critical DoD applications. Specifically, the CHSSI goal for HEATR is to provide portable, parallel versions of several existing ATR detection and classification algorithms to the ATR-user community to achieve near real-time capability. The HEATR project will create parallel versions of existing automatic target recognition (ATR) detection and classification algorithms and generate reusable code that will support the porting and software development process for ATR HPC software. The HEATR team has selected detection/classification algorithms from both the model-based and training-based (template-based) arenas in order to consider the parallelization requirements for detection/classification algorithms across ATR technology. This allows the team to assess the impact that parallelization would have on detection/classification performance across ATR technology. A field demo is included in this project. Finally, any parallel tools produced to support the project will be refined and returned to the ATR user community along with the parallel ATR algorithms. This paper will review: (1) HPCMP structure as it relates to HEATR, (2) overall structure of the HEATR project, (3) preliminary results for the first algorithm Alpha Test, (4) CHSSI requirements for HEATR, and (5) project management issues and lessons learned.
Emissivity spectra estimated with the MaxEnTES algorithm
NASA Astrophysics Data System (ADS)
Barducci, A.; Guzzi, D.; Lastri, C.; Nardino, V.; Pippi, I.; Raimondi, V.
2014-10-01
Temperature and Emissivity Separation (TES) applied to multispectral or hyperspectral Thermal Infrared (TIR) images of the Earth is a relevant issue for many remote sensing applications. The TIR spectral radiance can be modeled by means of the well-known Planck's law, as a function of the target temperature and emissivity. The estimation of these target parameters (i.e., Temperature Emissivity Separation, aka TES) is hindered by the fact that the number of measurements is smaller than the number of unknowns. Existing TES algorithms implement a temperature estimator in which this indeterminacy is removed by adopting some a priori assumption that conditions the retrieved temperature and emissivity. Due to its mathematical structure, the Maximum Entropy formalism (MaxEnt) seems well suited for carrying out this complex TES operation. The main advantage of MaxEnt statistical inference is the absence of any external hypothesis, which instead characterizes most existing TES algorithms. In this paper we describe the performance of the MaxEnTES (Maximum Entropy Temperature Emissivity Separation) algorithm as applied to ten TIR spectral channels of a MIVIS dataset collected over Italy. We compare the temperature and emissivity spectra estimated by this algorithm with independent estimates obtained with two previous TES methods (Grey Body Emissivity (GBE) and Model Emittance Calculation (MEC)). We show that MaxEnTES is a reliable algorithm in terms of its higher output signal-to-noise ratio and the negligibility of systematic errors that bias the estimated temperature in other TES procedures.
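The forward model behind any TES method is Planck's law, and the sketch below shows why the problem is underdetermined: inverting the law under a blackbody assumption recovers only a brightness temperature, which underestimates the true temperature whenever emissivity is below one. The constants and the 10 µm wavelength are standard values, not taken from the paper.

```python
import math

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def radiance(wavelength_m, temp_k, emissivity=1.0):
    """Spectral radiance from Planck's law, W m^-2 sr^-1 m^-1."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = math.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0
    return emissivity * a / b

def brightness_temperature(wavelength_m, L):
    """Invert Planck's law assuming a blackbody (emissivity = 1)."""
    a = 2.0 * H * C**2 / wavelength_m**5
    return H * C / (wavelength_m * KB * math.log(a / L + 1.0))

# A 300 K target with emissivity 0.95 observed at 10 um: the blackbody
# inversion returns a temperature below 300 K, the ambiguity TES must resolve.
L = radiance(10e-6, 300.0, emissivity=0.95)
Tb = brightness_temperature(10e-6, L)
```

With N spectral channels there are N measured radiances but N emissivities plus one temperature to estimate, which is the "fewer measurements than unknowns" problem the abstract describes.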
Quantum algorithms for quantum field theories.
Jordan, Stephen P; Lee, Keith S M; Preskill, John
2012-06-01
Quantum field theory reconciles quantum mechanics and special relativity, and plays a central role in many areas of physics. We developed a quantum algorithm to compute relativistic scattering probabilities in a massive quantum field theory with quartic self-interactions (φ(4) theory) in spacetime of four and fewer dimensions. Its run time is polynomial in the number of particles, their energy, and the desired precision, and applies at both weak and strong coupling. In the strong-coupling and high-precision regimes, our quantum algorithm achieves exponential speedup over the fastest known classical algorithm. PMID:22654052
Entrepreneur achievement. Liaoning province.
Zhao, R
1994-03-01
This paper reports the successful entrepreneurial endeavors of members of a 20-person women's group in Liaoning Province, China. Jing Yuhong, a member of the Family Planning Association at Shileizi Village, Dalian City, provided the basis for their achievements by first building an entertainment/study room in her home to encourage married women to learn family planning. Once stocked with books, magazines, pamphlets, and other materials on family planning and agricultural technology, dozens of married women in the neighborhood flocked voluntarily to the room. Yuhong also set out to give these women a way to earn their own income as a means of helping them gain greater equality with their husbands and exert greater control over their personal reproductive and social lives. She gave a section of her farming land to the women's group, loaned approximately US$5200 to group members to help them generate income from small business initiatives, built a livestock shed in her garden for the group to raise marmots, and erected an awning behind her house under which mushrooms could be grown. The investment yielded $12,000 in the first year, allowing each woman to keep more than $520 in dividends. Members then soon began going to fairs in the capital and other places to learn about the outside world, and have successfully ventured out on their own to generate individual incomes. Ten out of twenty women engaged in these income-generating activities asked for and got the one-child certificate.
HEPEX - achievements and challenges!
NASA Astrophysics Data System (ADS)
Pappenberger, Florian; Ramos, Maria-Helena; Thielen, Jutta; Wood, Andy; Wang, Qj; Duan, Qingyun; Collischonn, Walter; Verkade, Jan; Voisin, Nathalie; Wetterhall, Fredrik; Vuillaume, Jean-Francois Emmanuel; Lucatero Villasenor, Diana; Cloke, Hannah L.; Schaake, John; van Andel, Schalk-Jan
2014-05-01
HEPEX is an international initiative bringing together hydrologists, meteorologists, researchers and end-users to develop advanced probabilistic hydrological forecast techniques for improved flood, drought and water management. HEPEX was launched in 2004 as an independent, cooperative international scientific activity. During the first meeting, the overarching goal was defined as: "to develop and test procedures to produce reliable hydrological ensemble forecasts, and to demonstrate their utility in decision making related to the water, environmental and emergency management sectors." The applications of hydrological ensemble predictions span large spatio-temporal scales, ranging from short-term and localized predictions to global climate change and regional modeling. Within the HEPEX community, information is shared through its blog (www.hepex.org), meetings, testbeds and intercomparison experiments, as well as project reports. Key questions of HEPEX are: * What adaptations are required for meteorological ensemble systems to be coupled with hydrological ensemble systems? * How should the existing hydrological ensemble prediction systems be modified to account for all sources of uncertainty within a forecast? * What is the best way for the user community to take advantage of ensemble forecasts and to make better decisions based on them? This year HEPEX celebrates its 10th anniversary, and this poster will present a review of the main operational and research achievements and challenges prepared by HEPEX contributors on data assimilation, post-processing of hydrologic predictions, forecast verification, communication and use of probabilistic forecasts in decision-making. Additionally, we will present the most recent activities implemented by HEPEX and illustrate how everyone can join the community and participate in the development of new approaches in hydrologic ensemble prediction.
The Homogeneity of School Achievement.
ERIC Educational Resources Information Center
Cahan, Sorel
Since the measurement of school achievement involves the administration of achievement tests to various grades on various subjects, both grade level and subject matter contribute to within-school achievement variations. To determine whether achievement test scores vary most among different fields within a grade level, or within fields among…
Motivation and academic achievement in medical students
Yousefy, Alireza; Ghassemi, Gholamreza; Firouznia, Samaneh
2012-01-01
Background: Despite their ascribed intellectual ability and achieved academic pursuits, medical students' academic achievement is influenced by motivation. This study is an endeavor to examine the role of motivation in the academic achievement of medical students. Materials and Methods: In this cross-sectional correlational study, out of the total 422 medical students, from 4th to final year during the academic year 2007–2008, at the School of Medicine, Isfahan University of Medical Sciences, 344 participated in completion of the Inventory of School Motivation (ISM), comprising 43 items and measuring eight aspects of motivation. The gold standard for academic achievement was their average academic marks at pre-clinical and clinical levels. Data were analyzed by computer using descriptive and analytical tests, including Pearson correlation and Student's t-test. Results: Higher motivation scores in areas of competition, effort, social concern, and task were accompanied by higher average marks at pre-clinical as well as clinical levels. However, the latter group showed greater motivation for social power as compared to the former. Task and competition motivation was higher for boys than for girls. Conclusion: In view of our observations, students' academic achievement requires coordination and interaction between different aspects of motivation. PMID:23555107
Achievement in Boys' Schools 2010-12
ERIC Educational Resources Information Center
Wylie, Cathy; Berg, Melanie
2014-01-01
This report explores the achievement of school leavers from state and state-integrated boys' schools. The analysis from 2010 to 2012 shows school leavers from state boys' schools had higher qualifications than their male counterparts who attended state co-educational schools. The research was carried out for the Association of Boys' Schools of New…
Interactions Between Teaching Performance and Student Achievement.
ERIC Educational Resources Information Center
Hsu, Yi-Ming; White, William F.
There are two purposes for this study: first, to examine the relationship between college students' achievement and their ratings of instructors; second, to validate the two selected evaluation instruments that were designed specially for assessing teaching performance at the higher education level. Two evaluation inventories were selected for…
Academic Freedom, Achievement Standards and Professional Identity
ERIC Educational Resources Information Center
Sadler, D. Royce
2011-01-01
The tension between the freedom of academics to grade the achievements of their students without interference or coercion and the prerogative of higher education institutions to control grading standards is often deliberated by weighing up the authority and rights of the two parties. An alternative approach is to start with an analysis of the…
Reasoning about systolic algorithms
Purushothaman, S.
1986-01-01
Systolic algorithms are a class of parallel algorithms, with small grain concurrency, well suited for implementation in VLSI. They are intended to be implemented as high-performance, computation-bound back-end processors and are characterized by a tessellating interconnection of identical processing elements. This dissertation investigates the problem of proving the correctness of systolic algorithms. The following are reported in this dissertation: (1) a methodology for verifying the correctness of systolic algorithms based on representing an algorithm as recurrence equations and solving them; the methodology is demonstrated by proving the correctness of a systolic architecture for optimal parenthesization. (2) The implementation of mechanical proofs of correctness of two systolic algorithms, a convolution algorithm and an optimal parenthesization algorithm, using the Boyer-Moore theorem prover. (3) An induction principle for proving the correctness of systolic arrays which are modular. Two attendant inference rules, weak equivalence and shift transformation, which capture equivalent behavior of systolic arrays, are also presented.
Algorithm-development activities
NASA Technical Reports Server (NTRS)
Carder, Kendall L.
1994-01-01
The task of algorithm-development activities at USF continues. The algorithm for determining chlorophyll alpha concentration (Chl alpha) and the gelbstoff absorption coefficient from SeaWiFS and MODIS-N radiance data is our current priority.
Correlation of Wissler Human Thermal Model Blood Flow and Shiver Algorithms
NASA Technical Reports Server (NTRS)
Bue, Grant; Makinen, Janice; Cognata, Thomas
2010-01-01
The Wissler Human Thermal Model (WHTM) is a thermal math model of the human body that has been widely used to predict the human thermoregulatory response to a variety of cold and hot environments. The model has been shown to predict core temperature and skin temperatures higher and lower, respectively, than in tests of subjects in a crew escape suit working in controlled hot environments. Conversely, the model predicts core temperature and skin temperatures lower and higher, respectively, than in tests of lightly clad subjects immersed in cold water. The blood flow algorithms of the model have been investigated to allow for more and less flow, respectively, in the cold and hot cases. These changes in the model have yielded better correlation of skin and core temperatures in the cold and hot cases. The algorithm for onset of shiver did not need to be modified to achieve good agreement in cold immersion simulations.
Plimpton, Steven J.; Hendrickson, Bruce; Burns, Shawn P.; McLendon, William III; Rauchwerger, Lawrence
2005-07-15
The method of discrete ordinates is commonly used to solve the Boltzmann transport equation. The solution in each ordinate direction is most efficiently computed by sweeping the radiation flux across the computational grid. For unstructured grids this poses many challenges, particularly when implemented on distributed-memory parallel machines where the grid geometry is spread across processors. We present several algorithms relevant to this approach: (a) an asynchronous message-passing algorithm that performs sweeps simultaneously in multiple ordinate directions, (b) a simple geometric heuristic to prioritize the computational tasks that a processor works on, (c) a partitioning algorithm that creates columnar-style decompositions for unstructured grids, and (d) an algorithm for detecting and eliminating cycles that sometimes exist in unstructured grids and can prevent sweeps from successfully completing. Algorithms (a) and (d) are fully parallel; algorithms (b) and (c) can be used in conjunction with (a) to achieve higher parallel efficiencies. We describe our message-passing implementations of these algorithms within a radiation transport package. Performance and scalability results are given for unstructured grids with up to 3 million elements (500 million unknowns) running on thousands of processors of Sandia National Laboratories' Intel Tflops machine and DEC-Alpha CPlant cluster.
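Algorithm (d) above, detecting cycles that would stall a sweep, can be sketched with Kahn's topological sort: repeatedly remove nodes of in-degree zero, and any node that never reaches in-degree zero lies on (or downstream of) a cycle. This is a generic illustration, not the paper's implementation, and the tiny graph is an assumption.

```python
from collections import deque

def find_cycle_nodes(num_nodes, edges):
    """Return nodes that block a sweep: those on or behind a dependency cycle."""
    adj = [[] for _ in range(num_nodes)]
    indeg = [0] * num_nodes
    for u, v in edges:               # edge (u, v): u must be swept before v
        adj[u].append(v)
        indeg[v] += 1
    queue = deque(i for i in range(num_nodes) if indeg[i] == 0)
    while queue:                     # peel off nodes whose dependencies are met
        u = queue.popleft()
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    # Anything still waiting on a predecessor is stuck behind a cycle.
    return [i for i in range(num_nodes) if indeg[i] > 0]

# Node 3 feeds a 3-cycle 0 -> 1 -> 2 -> 0; only the cycle itself remains stuck.
cycle = find_cycle_nodes(4, [(0, 1), (1, 2), (2, 0), (3, 0)])
```

Once such nodes are identified, one dependency edge per cycle can be dropped (at some cost in iteration count) so the sweep can complete, which is the role the abstract assigns to its cycle-elimination algorithm.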
INSENS classification algorithm report
Hernandez, J.E.; Frerking, C.J.; Myers, D.W.
1993-07-28
This report describes a new algorithm developed for the Immigration and Naturalization Service (INS) in support of the INSENS project for classifying vehicles and pedestrians using seismic data. This algorithm is less sensitive to nuisance alarms due to environmental events than the previous algorithm. Furthermore, the algorithm is simple enough that it can be implemented in the 8-bit microprocessor used in the INSENS system.
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order, high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
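The notion of order of accuracy the abstract relies on can be illustrated with standard central-difference stencils (not the paper's schemes): at the same grid spacing, the fourth-order formula is markedly more accurate than the second-order one, and the gap widens as the spacing shrinks.

```python
import math

def d2(f, x, h):
    """Second-order central difference for f'(x): error ~ h**2."""
    return (f(x + h) - f(x - h)) / (2 * h)

def d4(f, x, h):
    """Fourth-order central difference for f'(x): error ~ h**4."""
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)

x = 1.0
exact = math.cos(x)                       # derivative of sin at x
e2 = abs(d2(math.sin, x, 1e-2) - exact)   # ~1e-5 for this h
e4 = abs(d4(math.sin, x, 1e-2) - exact)   # several orders smaller
```

The same trade-off, a wider stencil buying a higher convergence rate, is what allows the high-order schemes in the abstract to remain accurate over millions of wave periods with only eight points per wavelength.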
Clutter discrimination algorithm simulation in pulse laser radar imaging
NASA Astrophysics Data System (ADS)
Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; Su, Xuan; Zhu, Fule
2015-10-01
Pulse laser radar imaging performance is greatly influenced by different kinds of clutter, and various algorithms have been developed to mitigate it. However, estimating the performance of a new algorithm is difficult. Here, a simulation model for evaluating clutter discrimination algorithms is presented. This model consists of laser pulse emission, clutter jamming, laser pulse reception, and target image production. Additionally, a hardware platform was set up to gather clutter data reflected by ground and trees; the logged data serve as the clutter-jamming input to the simulation model. The hardware platform includes a laser diode, a laser detector, and a high-sample-rate data logging circuit. The laser diode transmits short laser pulses (40 ns FWHM) at a 12.5 kHz pulse rate and a 905 nm wavelength. An analog-to-digital converter chip integrated in the sampling circuit works at 250 megasamples per second. The simulation model and the hardware platform together constitute a clutter discrimination algorithm simulation system. Using this system, after analyzing the logged clutter data, a new compound pulse detection algorithm was developed. This new algorithm combines a matched filter with constant fraction discrimination (CFD): the laser echo pulse signal is first processed by the matched filter, and CFD is then applied. Finally, clutter jamming from ground and trees is discriminated and the target image is produced. Laser radar images were simulated using the CFD algorithm, the matched filter algorithm, and the new algorithm, respectively. Simulation results demonstrate that the new algorithm achieves the best target imaging performance in mitigating clutter reflected by ground and trees.
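The second stage of the compound detector, constant fraction discrimination, can be sketched as follows: the pulse arrival time is taken where the leading edge crosses a fixed fraction of the pulse peak, with linear interpolation between samples. Unlike a fixed threshold, the crossing time does not shift with pulse amplitude. The fraction and waveform below are illustrative assumptions.

```python
def cfd_crossing(samples, fraction=0.5):
    """Return the (interpolated) sample index where the leading edge
    crosses `fraction` of the pulse peak, or None if it never does."""
    peak = max(samples)
    level = fraction * peak
    for i in range(1, len(samples)):
        if samples[i - 1] < level <= samples[i]:
            # linear interpolation between the two straddling samples
            return (i - 1) + (level - samples[i - 1]) / (samples[i] - samples[i - 1])
    return None

pulse = [0, 0, 1, 4, 9, 10, 7, 3, 1, 0]   # a digitized echo (illustrative)
t = cfd_crossing(pulse)                    # crossing of half the 10-unit peak
```

Scaling the pulse by any positive constant leaves `t` unchanged, which is why CFD timing is robust to the large return-amplitude variations between targets and clutter.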
Distilling the Verification Process for Prognostics Algorithms
NASA Technical Reports Server (NTRS)
Roychoudhury, Indranil; Saxena, Abhinav; Celaya, Jose R.; Goebel, Kai
2013-01-01
The goal of prognostics and health management (PHM) systems is to ensure system safety, and reduce downtime and maintenance costs. It is important that a PHM system is verified and validated before it can be successfully deployed. Prognostics algorithms are integral parts of PHM systems. This paper investigates a systematic process of verification of such prognostics algorithms. To this end, first, this paper distinguishes between technology maturation and product development. Then, the paper describes the verification process for a prognostics algorithm as it moves up to higher maturity levels. This process is shown to be an iterative process where verification activities are interleaved with validation activities at each maturation level. In this work, we adopt the concept of technology readiness levels (TRLs) to represent the different maturity levels of a prognostics algorithm. It is shown that at each TRL, the verification of a prognostics algorithm depends on verifying the different components of the algorithm according to the requirements laid out by the PHM system that adopts this prognostics algorithm. Finally, using simplified examples, the systematic process for verifying a prognostics algorithm is demonstrated as the prognostics algorithm moves up TRLs.
Algorithm and program for information processing with the filin apparatus
NASA Technical Reports Server (NTRS)
Gurin, L. S.; Morkrov, V. S.; Moskalenko, Y. I.; Tsoy, K. A.
1979-01-01
The reduction of spectral radiation data from space sources is described. The algorithm and program for identifying segments of information obtained from the Filin telescope-spectrometer on Salyut-4 are presented. The information segments represent suspected X-ray sources. The proposed algorithm is an algorithm of the lowest level; following evaluation, information free of uninformative segments is subject to further processing with algorithms of a higher level. The language used is FORTRAN 4.
An SMP soft classification algorithm for remote sensing
NASA Astrophysics Data System (ADS)
Phillips, Rhonda D.; Watson, Layne T.; Easterling, David R.; Wynne, Randolph H.
2014-07-01
This work introduces a symmetric multiprocessing (SMP) version of the continuous iterative guided spectral class rejection (CIGSCR) algorithm, a semiautomated classification algorithm for remote sensing (multispectral) images. The algorithm uses soft data clusters to produce a soft classification containing inherently more information than a comparable hard classification, at an increased computational cost. Previous work suggests that similar algorithms achieve good parallel scalability, motivating the parallel algorithm development work here. Experimental results of applying parallel CIGSCR to an image with approximately 10^8 pixels and six bands demonstrate superlinear speedup. A soft two-class classification is generated in just over 4 min using 32 processors.
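A minimal sketch of soft cluster memberships of the kind CIGSCR builds its soft classification from, using plain soft k-means (an assumption, not the paper's algorithm): each pixel receives a membership weight in every cluster, with the weights in each row summing to one, rather than a single hard label.

```python
import numpy as np

def soft_kmeans(X, k, beta=2.0, iters=20, seed=0):
    """Soft k-means: returns (memberships, centers); beta sets softness."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # squared distance from every point to every center
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        w = np.exp(-beta * d2)
        w /= w.sum(axis=1, keepdims=True)    # soft memberships, rows sum to 1
        centers = (w.T @ X) / w.sum(axis=0)[:, None]  # weighted means
    return w, centers

# Two well-separated 2-band "spectral" clusters (synthetic illustration).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
memberships, centers = soft_kmeans(X, 2)
```

The extra information the abstract mentions lives in these fractional weights: a mixed pixel near a class boundary keeps comparable membership in both clusters instead of being forced into one.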
Student academic achievement in college chemistry
NASA Astrophysics Data System (ADS)
Tabibzadeh, Kiana S.
General Chemistry is required for a variety of baccalaureate degrees, including all medical-related fields, engineering, and science majors. Depending on the institution, the prerequisite requirement for college-level General Chemistry varies. The success rate for this course is low. The purpose of this study is to examine the factors influencing student academic achievement and retention in General Chemistry at the college level. In this study, student achievement is defined by those students who earned grades of "C" or better. The dissertation contains in-depth studies on the influence of Intermediate Algebra as a prerequisite, compared to Fundamental Chemistry, on student academic achievement and student retention in college General Chemistry. In addition, the study examined the extent and manner in which student self-efficacy influences student academic achievement in college-level General Chemistry. The sample for this part of the study is 144 students enrolled in first-semester college-level General Chemistry. Student surveys determined student self-efficacy levels. The statistical analyses of the study demonstrated that Fundamental Chemistry is a better prerequisite for student academic achievement and student retention. The study also found that student self-efficacy has no influence on student academic achievement. The significance of this study will be to provide data for the purpose of establishing a uniform and most suitable prerequisite for college-level General Chemistry. Finally, the variables identified as influencing student academic achievement and enhancing student retention will support educators' mission to maximize students' ability to complete their educational goals at institutions of higher education.
Attitude Towards Physics and Additional Mathematics Achievement Towards Physics Achievement
ERIC Educational Resources Information Center
Veloo, Arsaythamby; Nor, Rahimah; Khalid, Rozalina
2015-01-01
The purpose of this research is to identify the difference in students' attitude towards Physics and Additional Mathematics achievement based on gender and relationship between attitudinal variables towards Physics and Additional Mathematics achievement with achievement in Physics. This research focused on six variables, which is attitude towards…
The Impact of Reading Achievement on Overall Academic Achievement
ERIC Educational Resources Information Center
Churchwell, Dawn Earheart
2009-01-01
This study examined the relationship between reading achievement and achievement in other subject areas. The purpose of this study was to determine if there was a correlation between reading scores as measured by the Standardized Test for the Assessment of Reading (STAR) and academic achievement in language arts, math, science, and social studies…
Algorithm of chest wall keloid treatment
Long, Xiao; Zhang, Mingzi; Wang, Yang; Zhao, Ru; Wang, Youbin; Wang, Xiaojun
2016-01-01
Keloids are common in the Asian population. Multiple or large keloids can appear on the chest wall because of its tendency to develop acne, sebaceous cysts, etc. It is difficult to find an ideal treatment for keloids in this area due to the limited local soft tissue and the higher recurrence rate. This study aims at establishing an individualized protocol that can be easily applied according to the size and number of chest wall keloids. A total of 445 patients received various methods (4 protocols) of treatment in our department from September 2006 to September 2012, according to the size and number of their chest wall keloids. All of the patients received adjuvant radiotherapy in our hospital. The Patient and Observer Scar Assessment Scale (POSAS) was used to assess the treatment effect by both doctors and patients. With a mean follow-up time of 13 months (range: 6–18 months), 362 patients participated in the POSAS assessment with doctors. The recurrence rate was 0.83%. There was a significant difference (P < 0.001) between the before-surgery and after-surgery scores from both doctors and patients, indicating that both doctors and patients were satisfied with the treatment effect. Our preliminary clinical results indicate that good outcomes can be achieved by choosing the proper method in this algorithm for Chinese patients with chest wall keloids. This algorithm can play a guiding role for surgeons when dealing with chest wall keloid treatment. PMID:27583896
The Economic Value of Higher Teacher Quality
ERIC Educational Resources Information Center
Hanushek, Eric A.
2011-01-01
Most analyses of teacher quality end without any assessment of the economic value of altered teacher quality. This paper combines information about teacher effectiveness with the economic impact of higher achievement. It begins with an overview of what is known about the relationship between teacher quality and student achievement. This provides…
Five-dimensional Janis-Newman algorithm
NASA Astrophysics Data System (ADS)
Erbin, Harold; Heurtier, Lucien
2015-08-01
The Janis-Newman algorithm has been shown to be successful in finding new stationary solutions of four-dimensional gravity. Attempts for a generalization to higher dimensions have already been found for the restricted cases with only one angular momentum. In this paper we propose an extension of this algorithm to five-dimensions with two angular momenta—using the prescription of Giampieri—through two specific examples, that are the Myers-Perry and BMPV black holes. We also discuss possible enlargements of our prescriptions to other dimensions and maximal number of angular momenta, and show how dimensions higher than six appear to be much more challenging to treat within this framework. Nonetheless this general algorithm provides a unification of the formulation in d=3,4,5 of the Janis-Newman algorithm, from which several examples are exposed, including the BTZ black hole.
Higher Education Exchange, 2012
ERIC Educational Resources Information Center
Brown, David W., Ed.; Witte, Deborah, Ed.
2012-01-01
"Higher Education Exchange" publishes case studies, analyses, news, and ideas about efforts within higher education to develop more democratic societies. Contributors to this issue of the "Higher Education Exchange" examine whether institutions of higher learning are doing anything to increase the capacity of citizens to shape their future.…
Higher Education Exchange, 2010
ERIC Educational Resources Information Center
Brown, David W., Ed.; Witte, Deborah, Ed.
2010-01-01
"Higher Education Exchange" publishes case studies, analyses, news, and ideas about efforts within higher education to develop more democratic societies. Contributors to this issue of the "Higher Education Exchange" examine whether institutions of higher learning are doing anything to increase the capacity of citizens to shape their future.…
Higher Education Exchange, 2008
ERIC Educational Resources Information Center
Brown, David W., Ed.; Witte, Deborah, Ed.
2008-01-01
"Higher Education Exchange" publishes case studies, analyses, news, and ideas about efforts within higher education to develop more democratic societies. Contributors to this issue of the "Higher Education Exchange" examine whether institutions of higher learning are doing anything to increase the capacity of citizens to shape their future.…
Higher Education Exchange, 2014
ERIC Educational Resources Information Center
Brown, David W., Ed.; Witte, Deborah, Ed.
2014-01-01
Research shows that not only does higher education not see the public; when the public, in turn, looks at higher education, it sees mostly malaise, inefficiencies, expense, and unfulfilled promises. Yet, the contributors to this issue of the "Higher Education Exchange" tell of bright spots in higher education where experiments in working…
Higher Education Exchange, 2004
ERIC Educational Resources Information Center
Brown, David W., Ed.; Witte, Deborah, Ed.
2004-01-01
The Higher Education Exchange is part of a movement to strengthen higher education's democratic mission and foster a more democratic culture throughout American society. Working in this tradition, the Higher Education Exchange publishes case studies, analyses, news, and ideas about efforts within higher education to develop more democratic…
Higher Education Exchange, 2005
ERIC Educational Resources Information Center
Brown, David W., Ed.; Witte, Deborah, Ed.
2005-01-01
The "Higher Education Exchange" is part of a movement to strengthen higher education's democratic mission and foster a more democratic culture throughout American society. Working in this tradition, the "Higher Education Exchange" publishes case studies, analyses, news, and ideas about efforts within higher education to develop more democratic…
Higher Education Exchange, 2011
ERIC Educational Resources Information Center
Brown, David W., Ed.; Witte, Deborah, Ed.
2011-01-01
"Higher Education Exchange" publishes case studies, analyses, news, and ideas about efforts within higher education to develop more democratic societies. Contributors to this issue of the "Higher Education Exchange" examine whether institutions of higher learning are doing anything to increase the capacity of citizens to shape their future.…
Cherokee Culture and School Achievement.
ERIC Educational Resources Information Center
Brown, Anthony D.
1980-01-01
Compares the effect of cooperative and competitive behaviors of Cherokee and Anglo American elementary school students on academic achievement. Suggests changes in teaching techniques and lesson organization that might raise academic achievement while taking into consideration tribal traditions that limit scholastic achievement in an…
Facial Composite System Using Genetic Algorithm
NASA Astrophysics Data System (ADS)
Zahradníková, Barbora; Duchovičová, Soňa; Schreiber, Peter
2014-12-01
The article deals with genetic algorithms and their application in face identification. The purpose of the research is to develop a free and open-source facial composite system using evolutionary algorithms, primarily the processes of selection and breeding. Initial testing demonstrated higher quality of the final composites and a substantial reduction in composite processing time. System requirements were specified, and directions for future research were proposed in order to improve the results.
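The abstract does not give the evolutionary operators the system uses, but the selection-and-breeding loop it refers to can be illustrated with a generic genetic algorithm sketch. The operators, parameters, and the one-max fitness below are illustrative assumptions, not the paper's actual composite-evolution system:

```python
import random

def genetic_algorithm(fitness, length, pop_size=30, gens=100, pmut=0.02, seed=1):
    """Toy GA: tournament selection, one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(gens):
        def pick():
            # tournament of size 2: keep the fitter of two random parents
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, length)           # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if rng.random() < pmut else g for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

In a composite system, the "fitness" would be the witness's rating of each candidate face rather than a computable function, but the loop structure is the same.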
Improved hybrid optimization algorithm for 3D protein structure prediction.
Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang
2014-07-01
A new improved hybrid optimization algorithm, the PGATS algorithm, based on the toy off-lattice model, is presented for three-dimensional protein structure prediction problems. The algorithm combines particle swarm optimization (PSO), a genetic algorithm (GA), and tabu search (TS). In addition, several improvement strategies are adopted: a stochastic disturbance factor is introduced into the particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are replaced with a randomized linear method; and the tabu search algorithm is improved by adding a mutation operator. Through this combination of strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be treated as a global optimization problem with multiple extrema and multiple parameters; this is the theoretical principle behind the hybrid optimization algorithm proposed in this paper. The algorithm combines local and global search, which overcomes the shortcomings of any single algorithm and gives full play to the advantages of each. The method is validated on the standard benchmark Fibonacci sequences and on real protein sequences. Experiments show that the proposed method outperforms the single algorithms in the accuracy of the computed protein sequence energy value, proving it an effective way to predict protein structure. PMID:25069136
The delay multiply and sum beamforming algorithm in ultrasound B-mode medical imaging.
Matrone, Giulia; Savoia, Alessandro Stuart; Caliano, Giosue; Magenes, Giovanni
2015-04-01
Most ultrasound medical imaging systems currently on the market implement standard Delay and Sum (DAS) beamforming to form B-mode images. However, the image resolution and contrast achievable with DAS are limited by the aperture size and by the operating frequency. For this reason, different beamformers have been presented in the literature, mainly based on adaptive algorithms, which achieve higher performance at the cost of increased computational complexity. In this paper, we propose the use of an alternative nonlinear beamforming algorithm for medical ultrasound imaging, called Delay Multiply and Sum (DMAS), which was originally conceived for a RADAR microwave system for breast cancer detection. We modify the DMAS beamformer and test its performance on both simulated and experimentally collected linear-scan data, comparing the point spread functions, beampatterns, synthetic phantom, and in vivo carotid artery images obtained with standard DAS and with the proposed algorithm. Results show that the DMAS beamformer outperforms DAS in both simulated and experimental trials, and that the main improvement brought by this new method is a significantly higher contrast resolution (i.e., a narrower main lobe and lower side lobes), which translates into an increased dynamic range and better quality of B-mode images.
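As a rough illustration of the difference between the two beamformers, here is a minimal sketch (not the authors' implementation) of DAS and of the signed-square-root pairwise-product form of DMAS, applied to channel data that has already been delay-aligned:

```python
import numpy as np

def das(delayed):
    """Standard Delay and Sum: sum the delay-aligned channel samples."""
    return delayed.sum(axis=0)

def dmas(delayed):
    """Delay Multiply and Sum: combinatorially multiply every pair of
    delay-aligned channels; the signed square root keeps the result in
    the original signal dimensionality."""
    n_ch, n_s = delayed.shape
    out = np.zeros(n_s)
    for i in range(n_ch - 1):
        for j in range(i + 1, n_ch):
            prod = delayed[i] * delayed[j]
            out += np.sign(prod) * np.sqrt(np.abs(prod))
    return out
```

The pairwise products reward coherence across channels: signals correlated over the aperture reinforce, while uncorrelated noise tends to cancel, which is the source of the narrower main lobe and lower side lobes the abstract reports.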
Segmentation of MRI Brain Images with an Improved Harmony Searching Algorithm
Yang, Zhang; Li, Guo; Weifeng, Ding
2016-01-01
The harmony searching (HS) algorithm is an optimization search algorithm currently applied to many practical problems. The HS algorithm iteratively revises the variables in the harmony memory and the probabilities of candidate values, converging toward an optimal solution. Accordingly, this study proposed a modified algorithm to improve its efficiency. First, a rough set algorithm was employed to improve the convergence and accuracy of the HS algorithm. Then, the optimal value was obtained using the improved HS algorithm. This converged optimal value was employed as the initial value of the fuzzy clustering algorithm for segmenting magnetic resonance imaging (MRI) brain images. Experimental results showed that the improved HS algorithm attained better convergence and more accurate results than the original HS algorithm. In our study, the MRI image segmentation performance of the improved algorithm was superior to that of the original fuzzy clustering method. PMID:27403428
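The improvisation loop that the abstract summarizes (memory consideration, pitch adjustment, random selection) can be sketched as below. The parameter values and the sphere test function are illustrative assumptions, and the paper's rough-set and fuzzy-clustering stages are omitted:

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, iters=2000, seed=0):
    """Minimal harmony search: keep a memory of candidate solutions,
    improvise new harmonies, and replace the worst member when improved."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [f(x) for x in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                # memory consideration
                v = rng.choice(memory)[d]
                if rng.random() < par:             # pitch adjustment
                    v += rng.uniform(-1.0, 1.0) * 0.05 * (hi - lo)
            else:                                  # random selection
                v = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, v)))
        s = f(new)
        worst = max(range(hms), key=scores.__getitem__)
        if s < scores[worst]:
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]
```

In the segmentation pipeline described above, the returned optimum would seed the fuzzy clustering of the MRI intensities rather than be the final answer.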
[The correlation based mid-infrared temperature and emissivity separation algorithm].
Cheng, Jie; Nie, Ai-Xiu; Du, Yong-Ming
2009-02-01
Temperature and emissivity separation is the key problem in infrared remote sensing. Based on the analysis of the relationship between the atmospheric downward radiance and surface emissivity containing atmosphere residue without the effects of sun irradiation, the present paper puts forward a temperature and emissivity separation algorithm for the ground-based mid-infrared hyperspectral data. The algorithm uses the correlation between the atmospheric downward radiance and surface emissivity containing atmosphere residue as a criterion to optimize the surface temperature, and the correlation between the atmospheric downward radiance and surface emissivity containing atmosphere residue depends on the bias between the estimated surface temperature and true surface temperature. The larger the temperature bias, the greater the correlation. Once we have obtained the surface temperature, the surface emissivity can be calculated easily. The accuracy of the algorithm was evaluated with the simulated mid-infrared hyperspectral data. The results of simulated calculation show that the algorithm can achieve higher accuracy of temperature and emissivity inversion, and also has broad applicability. Meanwhile, the algorithm is insensitive to the instrumental random noise and the change in atmospheric downward radiance during the field measurements.
Peak detection in fiber Bragg grating using a fast phase correlation algorithm
NASA Astrophysics Data System (ADS)
Lamberti, A.; Vanlanduit, S.; De Pauw, B.; Berghmans, F.
2014-05-01
The fiber Bragg grating sensing principle is based on exact tracking of the peak wavelength location. Several peak detection techniques have already been proposed in the literature. Among these, conventional peak detection (CPD) methods such as the maximum detection algorithm (MDA) do not achieve very high precision and accuracy, especially when the signal-to-noise ratio (SNR) and the wavelength resolution are poor. On the other hand, recently proposed algorithms, like the cross-correlation demodulation algorithm (CCA), are more precise and accurate but require higher computational effort. To overcome these limitations, we developed a novel fast phase correlation (FPC) algorithm which performs as well as the CCA while being considerably faster. This paper presents the FPC technique and analyzes its performance for different SNRs and wavelength resolutions. Using simulations and experiments, we compared the FPC with the MDA and CCA algorithms. The FPC detection capabilities were as precise and accurate as those of the CCA and considerably better than those of the CPD methods. The FPC computational time was up to 50 times lower than that of the CCA, making the FPC a valid candidate for future implementation in real-time systems.
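The paper's FPC algorithm is not reproduced here, but the core idea of phase-based correlation, recovering a spectral shift from the slope of the cross-power-spectrum phase instead of locating a correlation maximum, can be sketched as follows (the grid size, number of bins, and Gaussian test spectrum are illustrative assumptions):

```python
import numpy as np

def phase_shift_estimate(ref, meas, nbins=5):
    """Estimate the (possibly sub-sample) shift of `meas` relative to
    `ref` from the phase of their cross-power spectrum."""
    n = len(ref)
    cross = np.fft.rfft(meas) * np.conj(np.fft.rfft(ref))
    k = np.arange(1, nbins + 1)     # low bins dominate for a smooth peak
    phase = np.unwrap(np.angle(cross))[k]
    # for meas(x) = ref(x - delta): phase(k) = -2*pi*k*delta/n,
    # so a least-squares fit of the slope recovers delta
    slope = np.sum(k * phase) / np.sum(k * k)
    return -slope * n / (2 * np.pi)
```

Because the fit uses only a handful of low-frequency bins, the cost is dominated by the FFTs, which is one way a phase-domain method can beat a full wavelength-domain cross-correlation search.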
An automatic geo-spatial object recognition algorithm for high resolution satellite images
NASA Astrophysics Data System (ADS)
Ergul, Mustafa; Alatan, A. Aydın.
2013-10-01
This paper proposes a novel automatic geo-spatial object recognition algorithm for high resolution satellite images. The proposed algorithm consists of two main steps: a hypothesis generation step with a local feature-based algorithm and a verification step with a shape-based approach. In the hypothesis generation step, a set of hypotheses for possible object locations is generated using a Bag of Visual Words type approach, accepting a higher false-positive rate in order to reduce missed detections. In the verification step, the foreground objects are first extracted by a semi-supervised image segmentation algorithm that utilizes the detection results from the previous step, and then the shape descriptors of the segmented objects are used to prune out the false positives. Based on simulation results, it can be argued that the proposed algorithm achieves both high precision and high recall as a result of taking advantage of both local feature-based and shape-based object detection approaches. The superiority of the proposed method stems from its ability to minimize the false alarm rate, since most object shapes carry characteristic, discriminative information about object identity and functionality.
A Hybrid CPU/GPU Pattern-Matching Algorithm for Deep Packet Inspection.
Lee, Chun-Liang; Lin, Yi-Shan; Chen, Yaw-Chung
2015-01-01
The large quantities of data now being transferred via high-speed networks have made deep packet inspection indispensable for security purposes. Scalable and low-cost signature-based network intrusion detection systems have been developed for deep packet inspection for various software platforms. Traditional approaches that only involve central processing units (CPUs) are now considered inadequate in terms of inspection speed. Graphic processing units (GPUs) have superior parallel processing power, but transmission bottlenecks can reduce optimal GPU efficiency. In this paper we describe our proposal for a hybrid CPU/GPU pattern-matching algorithm (HPMA) that divides and distributes the packet-inspecting workload between a CPU and GPU. All packets are initially inspected by the CPU and filtered using a simple pre-filtering algorithm, and packets that might contain malicious content are sent to the GPU for further inspection. Test results indicate that in terms of random payload traffic, the matching speed of our proposed algorithm was 3.4 times and 2.7 times faster than those of the AC-CPU and AC-GPU algorithms, respectively. Further, HPMA achieved higher energy efficiency than the other tested algorithms.
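HPMA's central idea, clear most packets with a cheap test and hand only suspicious ones to the expensive matcher, can be illustrated in miniature. The two-byte prefix filter below is an illustrative stand-in for HPMA's actual pre-filtering algorithm, and the final exact-substring pass stands in for the GPU-side matching:

```python
def build_prefilter(signatures, k=2):
    """Collect the first k bytes of every signature; a packet containing
    none of these prefixes cannot match any signature."""
    return {sig[:k] for sig in signatures}

def prefilter_hit(packet, prefixes, k=2):
    """Cheap first pass: does any k-byte window appear in the prefix set?"""
    return any(packet[i:i + k] in prefixes for i in range(len(packet) - k + 1))

def inspect(packets, signatures):
    """Two-stage inspection: pre-filter on the 'CPU', then run the
    expensive exact match only on the suspicious subset."""
    prefixes = build_prefilter(signatures)
    suspicious = [p for p in packets if prefilter_hit(p, prefixes)]
    return [p for p in suspicious if any(s in p for s in signatures)]
```

The design wins when most traffic is benign: the O(1)-per-window set lookup clears the bulk of packets, so the costly matcher (GPU transfer plus pattern automaton in the paper's setting) sees only a small residue.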
iCut: an Integrative Cut Algorithm Enables Accurate Segmentation of Touching Cells
He, Yong; Gong, Hui; Xiong, Benyi; Xu, Xiaofeng; Li, Anan; Jiang, Tao; Sun, Qingtao; Wang, Simin; Luo, Qingming; Chen, Shangbin
2015-01-01
Individual cells play essential roles in the biological processes of the brain. The number of neurons changes during both normal development and disease progression. High-resolution imaging has made it possible to directly count cells. However, the automatic and precise segmentation of touching cells continues to be a major challenge for massive and highly complex datasets. Thus, an integrative cut (iCut) algorithm, which combines information regarding spatial location and intervening and concave contours with the established normalized cut, has been developed. iCut involves two key steps: (1) a weighting matrix is first constructed with the abovementioned information regarding the touching cells and (2) a normalized cut algorithm that uses the weighting matrix is implemented to separate the touching cells into isolated cells. This novel algorithm was evaluated using two types of data: the open SIMCEP benchmark dataset and our micro-optical imaging dataset from a Nissl-stained mouse brain. It has achieved a promising recall/precision of 91.2 ± 2.1%/94.1 ± 1.8% and 86.8 ± 4.1%/87.5 ± 5.7%, respectively, for the two datasets. As quantified using the harmonic mean of recall and precision, the accuracy of iCut is higher than that of some state-of-the-art algorithms. The better performance of this fully automated algorithm can benefit studies of brain cytoarchitecture. PMID:26168908
Doppler-based motion compensation algorithm for focusing the signature of a rotorcraft.
Goldman, Geoffrey H
2013-02-01
A computationally efficient algorithm was developed and tested to compensate for the effects of motion on the acoustic signature of a rotorcraft. For target signatures with large spectral peaks that vary slowly in amplitude and have near constant frequency, the time-varying Doppler shift can be tracked and then removed from the data. The algorithm can be used to preprocess data for classification, tracking, and nulling algorithms. The algorithm was tested on rotorcraft data. The average instantaneous frequency of the first harmonic of a rotorcraft was tracked with a fixed-lag smoother. Then, state space estimates of the frequency were used to calculate a time warping that removed the effect of a time-varying Doppler shift from the data. The algorithm was evaluated by analyzing the increase in the amplitude of the harmonics in the spectrum of a rotorcraft. The results depended upon the frequency of the harmonics and the processing interval duration. Under good conditions, the results for the fundamental frequency of the target (~11 Hz) almost achieved an estimated upper bound. The results for higher frequency harmonics had larger increases in the amplitude of the peaks, but significantly lower than the estimated upper bounds.
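The track-then-warp step described above can be sketched as follows, assuming the instantaneous frequency has already been estimated (the fixed-lag smoother is omitted); the linear 10-to-12 Hz drift in the usage below is an illustrative stand-in for a Doppler-shifted rotorcraft harmonic:

```python
import numpy as np

def dedoppler(signal, inst_freq, fs):
    """Time-warp resampling: integrate the tracked instantaneous
    frequency to build a warped time axis on which the harmonic has
    constant frequency, then resample the signal uniformly on that axis."""
    f0 = np.mean(inst_freq)
    # warped time advances faster where the observed frequency is higher
    tau = np.cumsum(inst_freq / f0) / fs
    t_uniform = np.arange(len(signal)) / fs
    return np.interp(t_uniform, tau, signal)
```

After the warp, a drifting tone collapses onto a single spectral line, which is exactly the "increase in the amplitude of the harmonics" the abstract uses as its evaluation metric.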
Transactional Algorithm for Subtracting Fractions: Go Shopping
ERIC Educational Resources Information Center
Pinckard, James Seishin
2009-01-01
The purpose of this quasi-experimental research study was to examine the effects of an alternative or transactional algorithm for subtracting mixed numbers within the middle school setting. Initial data were gathered from the student achievement of four mathematics teachers at three different school sites. The results indicated students who…
NASA Astrophysics Data System (ADS)
Schämann, M.; Bücker, M.; Hessel, S.; Langmann, U.
2008-05-01
High data rates combined with high mobility represent a challenge for the design of cellular devices. Advanced algorithms are required, which result in higher complexity, more chip area, and increased power consumption. However, this contrasts with the limited power supply of mobile devices. This presentation discusses an HSDPA receiver that has been optimized for power consumption, with the focus on the algorithmic and architectural levels. On the algorithmic level, the Rake combiner, Prefilter-Rake equalizer, and MMSE equalizer are compared with regard to their BER performance. Both equalizer approaches provide a significant performance increase for high data rates compared to the Rake combiner, which is commonly used for lower data rates. For both equalizer approaches, several adaptive algorithms are available which differ in complexity and convergence properties. To identify the algorithm that achieves the required performance with the lowest power consumption, the algorithms have been investigated using SystemC models with regard to their performance and arithmetic complexity. Additionally, for the Prefilter-Rake equalizer, the power estimates of a modified Griffith (LMS) and a Levinson (RLS) algorithm have been compared with the tool ORINOCO supplied by ChipVision. The accuracy of this tool has been verified with a scalable architecture of the UMTS channel estimation, described both in SystemC and VHDL and targeting a 130 nm CMOS standard cell library. An architecture combining all three approaches with an adaptive control unit is presented. The control unit monitors the current condition of the propagation channel and adjusts receiver parameters such as filter size and oversampling ratio to minimize power consumption while maintaining the required performance. The optimization strategies result in a reduction of the number of arithmetic operations of up to 70% for single components, which leads to an estimated power reduction of up to 40
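The LMS family mentioned above illustrates the complexity/performance trade-off at stake: an LMS update costs O(N) operations per sample for N taps, versus the considerably higher cost of an RLS update. A generic LMS adaptive filter (a sketch, not the paper's modified Griffith algorithm) looks like this:

```python
import numpy as np

def lms_filter(x, d, taps=4, mu=0.05):
    """Generic LMS adaptive filter: adapt tap weights w so that the
    filter output w.u tracks the desired signal d; O(taps) per sample."""
    w = np.zeros(taps)
    err = np.zeros(len(x))
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]   # most recent sample first
        y = w @ u
        err[n] = d[n] - y
        w += mu * err[n] * u              # stochastic-gradient update
    return w, err
```

The step size mu trades convergence speed against steady-state error, which is the kind of convergence property the abstract says distinguishes the candidate algorithms.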
Students’ Achievement Goals, Learning-Related Emotions and Academic Achievement
Lüftenegger, Marko; Klug, Julia; Harrer, Katharina; Langer, Marie; Spiel, Christiane; Schober, Barbara
2016-01-01
In the present research, the recently proposed 3 × 2 model of achievement goals is tested and associations with achievement emotions and their joint influence on academic achievement are investigated. The study was conducted with 388 students using the 3 × 2 Achievement Goal Questionnaire including the six proposed goal constructs (task-approach, task-avoidance, self-approach, self-avoidance, other-approach, other-avoidance) and the enjoyment and boredom scales from the Achievement Emotion Questionnaire. Exam grades were used as an indicator of academic achievement. Findings from CFAs provided strong support for the proposed structure of the 3 × 2 achievement goal model. Self-based goals, other-based goals and task-approach goals predicted enjoyment. Task-approach goals negatively predicted boredom. Task-approach and other-approach predicted achievement. The indirect effects of achievement goals through emotion variables on achievement were assessed using bias-corrected bootstrapping. No mediation effects were found. Implications for educational practice are discussed. PMID:27199836
Semioptimal practicable algorithmic cooling
NASA Astrophysics Data System (ADS)
Elias, Yuval; Mor, Tal; Weinstein, Yossi
2011-04-01
Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon’s entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.
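The basic building block shared by these cooling algorithms is the 3-spin compression step, which boosts the polarization of one spin at the expense of the other two; in the ideal case a single step maps a bias of epsilon to (3*epsilon - epsilon**3) / 2. A minimal sketch of iterating this ideal boost (ignoring the spin bookkeeping that distinguishes PAC, SOPAC, and exhaustive AC) is:

```python
def compression_boost(eps):
    """Ideal polarization of the target spin after one 3-spin
    compression step applied to three spins of equal bias eps."""
    return (3 * eps - eps ** 3) / 2

def cool(eps, levels):
    """Iterate the ideal boost over a number of recursive levels."""
    for _ in range(levels):
        eps = compression_boost(eps)
    return eps
```

For small bias each step multiplies the polarization by roughly 3/2, which is why starting from 1% polarization about a dozen ideal levels suffice to pass the "mildly pure" 60% threshold the abstract mentions; the spin counts quoted there come from accounting for the ancilla spins each level consumes.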
Advancements to the planogram frequency–distance rebinning algorithm
Champley, Kyle M; Raylman, Raymond R; Kinahan, Paul E
2010-01-01
In this paper we consider the task of image reconstruction in positron emission tomography (PET) with the planogram frequency–distance rebinning (PFDR) algorithm. The PFDR algorithm is a rebinning algorithm for PET systems with panel detectors. The algorithm is derived in the planogram coordinate system which is a native data format for PET systems with panel detectors. A rebinning algorithm averages over the redundant four-dimensional set of PET data to produce a three-dimensional set of data. Images can be reconstructed from this rebinned three-dimensional set of data. This process enables one to reconstruct PET images more quickly than reconstructing directly from the four-dimensional PET data. The PFDR algorithm is an approximate rebinning algorithm. We show that implementing the PFDR algorithm followed by the (ramp) filtered backprojection (FBP) algorithm in linogram coordinates from multiple views reconstructs a filtered version of our image. We develop an explicit formula for this filter which can be used to achieve exact reconstruction by means of a modified FBP algorithm applied to the stack of rebinned linograms and can also be used to quantify the errors introduced by the PFDR algorithm. This filter is similar to the filter in the planogram filtered backprojection algorithm derived by Brasse et al. The planogram filtered backprojection and exact reconstruction with the PFDR algorithm require complete projections which can be completed with a reprojection algorithm. The PFDR algorithm is similar to the rebinning algorithm developed by Kao et al. By expressing the PFDR algorithm in detector coordinates, we provide a comparative analysis between the two algorithms. Numerical experiments using both simulated data and measured data from a positron emission mammography/tomography (PEM/PET) system are performed. Images are reconstructed by PFDR+FBP (PFDR followed by 2D FBP reconstruction), PFDRX (PFDR followed by the modified FBP algorithm for exact
Du, Guanyao; Yu, Jianjun
2016-01-01
This paper investigates the system achievable rate for the multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) system with an energy harvesting (EH) relay. First, we propose two protocols, time switching-based decode-and-forward relaying (TSDFR) and a flexible power splitting-based DF relaying (PSDFR) protocol, considering two practical receiver architectures, to enable simultaneous information processing and energy harvesting at the relay. In the PSDFR protocol, we introduce a temporal parameter to describe the time division pattern between the two phases, which makes the protocol more flexible and general. In order to explore the system performance limit, we analyze the system achievable rate theoretically and formulate two optimization problems for the proposed protocols to maximize the system achievable rate. Since the problems are non-convex and difficult to solve, we first analyze them theoretically to obtain some explicit results, and then design an augmented Lagrangian penalty function (ALPF) based algorithm for them. Numerical results are provided to validate the accuracy of our analytical results and the effectiveness of the proposed ALPF algorithm. It is shown that PSDFR outperforms TSDFR, achieving a higher rate in such a MIMO-OFDM relaying system. Besides, we also investigate the impacts of the relay location, the number of antennas, and the number of subcarriers on the system performance. Specifically, the relay position greatly affects the performance of both protocols, and a relatively worse achievable rate is obtained when the relay is placed midway between the source and the destination. This is different from the MIMO-OFDM DF relaying system without EH. Moreover, the optimal factor indicating the time division pattern between the two phases in the PSDFR protocol is always above 0.8, which means that the common division of the total transmission time into two equal phases in
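The ALPF algorithm itself is not detailed in this abstract; as a rough sketch of the augmented Lagrangian penalty idea it names, the toy example below solves a small equality-constrained problem (the objective, step sizes, and function names are illustrative assumptions, not the authors' formulation):

```python
def alpf_minimize(f_grad, h, h_grad, x, rho=10.0, lam=0.0,
                  outer=20, inner=500, lr=0.05):
    """Toy augmented-Lagrangian penalty loop for min f(x) s.t. h(x) = 0."""
    for _ in range(outer):
        for _ in range(inner):          # inner: gradient descent on L(x; lam)
            c = h(x)
            gx = [gf + (lam + rho * c) * gh
                  for gf, gh in zip(f_grad(x), h_grad(x))]
            x = [xi - lr * gi for xi, gi in zip(x, gx)]
        lam += rho * h(x)               # outer: multiplier (dual) update
    return x

# Example: minimize (x-2)^2 + (y-1)^2 subject to x + y = 2 (optimum: 1.5, 0.5).
f_grad = lambda v: [2 * (v[0] - 2), 2 * (v[1] - 1)]
h      = lambda v: v[0] + v[1] - 2
h_grad = lambda v: [1.0, 1.0]
sol = alpf_minimize(f_grad, h, h_grad, [0.0, 0.0])
```

The penalty term drives constraint satisfaction while the multiplier update lets rho stay moderate, which is the usual motivation for ALPF over a pure penalty method.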
Quality control algorithms for rainfall measurements
NASA Astrophysics Data System (ADS)
Golz, Claudia; Einfalt, Thomas; Gabella, Marco; Germann, Urs
2005-09-01
One of the basic requirements for scientific use of rain data from raingauges and ground- and space-based radars is data quality control. Rain data could be used more intensively in many fields of activity (meteorology, hydrology, etc.) if the achievable data quality could be improved. This depends on the data quality delivered by the measuring devices and on the data quality enhancement procedures. To get an overview of the existing algorithms, a literature review and literature pool have been produced. The diverse algorithms have been evaluated against VOLTAIRE objectives and sorted into different groups. To test the chosen algorithms, an algorithm pool has been established, where the software is collected. A large part of the work presented here was implemented within the scope of the EU project VOLTAIRE (Validation of multisensor precipitation fields and numerical modeling in Mediterranean test sites).
Advanced Imaging Algorithms for Radiation Imaging Systems
Marleau, Peter
2015-10-01
The intent of the proposed work, in collaboration with the University of Michigan, is to develop the algorithms that will bring the analysis from qualitative images to quantitative attributes of objects containing SNM. The first step to achieving this is to develop an in-depth understanding of the intrinsic errors associated with the deconvolution and MLEM algorithms. A significant new effort will be undertaken to relate the image data to a posited three-dimensional model of geometric primitives that can be adjusted to get the best fit. In this way, parameters of the model such as sizes, shapes, and masses can be extracted for both radioactive and non-radioactive materials. This model-based algorithm will need the integrated response of a hypothesized configuration of material to be calculated many times. As such, both the MLEM and the model-based algorithm require significant increases in calculation speed in order to converge to solutions in practical amounts of time.
Hesitant fuzzy agglomerative hierarchical clustering algorithms
NASA Astrophysics Data System (ADS)
Zhang, Xiaolu; Xu, Zeshui
2015-02-01
Recently, hesitant fuzzy sets (HFSs) have been studied by many researchers as a powerful tool to describe and deal with uncertain data, but relatively few studies focus on the clustering analysis of HFSs. In this paper, we propose a novel hesitant fuzzy agglomerative hierarchical clustering algorithm for HFSs. The algorithm considers each of the given HFSs as a unique cluster in the first stage, and then compares each pair of HFSs by utilising the weighted Hamming distance or the weighted Euclidean distance. The two clusters with the smallest distance are merged. The procedure is repeated until the desired number of clusters is achieved. Moreover, we extend the algorithm to cluster interval-valued hesitant fuzzy sets, and finally illustrate the effectiveness of our clustering algorithms with experimental results.
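The clustering procedure described above can be sketched in a few lines. The weighted Hamming distance follows the abstract, while the minimum-pairwise-distance linkage rule and the example data are simplifying assumptions:

```python
def hamming(h1, h2, w):
    # weighted Hamming distance between two sorted, equal-length HFSs
    return sum(wj * abs(a - b) for wj, a, b in zip(w, sorted(h1), sorted(h2)))

def hf_agglomerative(hfss, w, k):
    """Merge the two closest clusters (min pairwise HFS distance) until k remain."""
    clusters = [[i] for i in range(len(hfss))]
    while len(clusters) > k:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(hamming(hfss[i], hfss[j], w)
                        for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] += clusters.pop(b)   # join the two closest clusters
    return clusters

# four HFSs forming two obvious groups
hfss = [[0.1, 0.2], [0.15, 0.25], [0.8, 0.9], [0.85, 0.95]]
clusters = hf_agglomerative(hfss, [0.5, 0.5], 2)
```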
Evaluation of TCP congestion control algorithms.
Long, Robert Michael
2003-12-01
Sandia, Los Alamos, and Lawrence Livermore National Laboratories currently deploy high-speed, wide-area network links to permit remote access to their supercomputer systems. The current TCP congestion algorithm does not take full advantage of high-delay, large-bandwidth environments. This report evaluates alternative TCP congestion algorithms and compares them with the currently used congestion algorithm. The goal was to determine whether an alternative algorithm could provide higher throughput with minimal impact on existing network traffic. The alternative congestion algorithms used were Scalable TCP and HighSpeed TCP. Network lab experiments were run to record the performance of each algorithm under different network configurations. The configurations used were back-to-back with no delay, back-to-back with a 30 ms delay, and two-to-one with a 30 ms delay. The performance of each algorithm was then compared to the existing TCP congestion algorithm to determine whether an acceptable alternative had been found. Comparisons were made based on throughput, stability, and fairness.
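As a rough illustration of why Scalable TCP suits large bandwidth-delay products, the sketch below models per-RTT congestion-window recovery after a loss for standard (Reno-style) TCP and for Scalable TCP. The per-RTT model and constants (a = 0.01, b = 0.875, the commonly quoted Scalable TCP parameters) are simplifying assumptions, not the report's testbed:

```python
def reno_recovery_rtts(w):
    # Reno: halve on loss, then grow by 1 MSS per RTT -> recovery time ~ w/2
    cwnd, rtts = w * 0.5, 0
    while cwnd < w:
        cwnd += 1
        rtts += 1
    return rtts

def scalable_recovery_rtts(w, a=0.01, b=0.875):
    # Scalable TCP: multiplicative increase (1+a) per RTT, decrease factor b,
    # so recovery time is independent of the window size
    cwnd, rtts = w * b, 0
    while cwnd < w:
        cwnd *= 1 + a
        rtts += 1
    return rtts
```

For a 10000-segment window the Reno model needs 5000 RTTs to recover, while the Scalable model needs the same handful of RTTs it would need for a 100-segment window.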
A Palmprint Recognition Algorithm Using Phase-Only Correlation
NASA Astrophysics Data System (ADS)
Ito, Koichi; Aoki, Takafumi; Nakajima, Hiroshi; Kobayashi, Koji; Higuchi, Tatsuo
This paper presents a palmprint recognition algorithm using Phase-Only Correlation (POC). The use of phase components in 2D (two-dimensional) discrete Fourier transforms of palmprint images makes it possible to achieve highly robust image registration and matching. In the proposed algorithm, POC is used to align scaling, rotation and translation between two palmprint images, and evaluate similarity between them. Experimental evaluation using a palmprint image database clearly demonstrates efficient matching performance of the proposed algorithm.
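As a toy illustration of phase-only correlation, the sketch below recovers a circular shift between two 1-D signals. Real palmprint POC operates on 2-D images with windowing and band-limiting, so the 1-D pure-Python DFT here is purely illustrative:

```python
import cmath

def dft(x, sign=-1):
    n = len(x)
    return [sum(x[m] * cmath.exp(sign * 2j * cmath.pi * k * m / n)
                for m in range(n)) for k in range(n)]

def poc_shift(f, g):
    """Estimate the circular shift between two 1D signals via POC."""
    n = len(f)
    F, G = dft(f), dft(g)
    # phase-only cross spectrum: keep phase, discard magnitude
    R = [z / abs(z) if abs(z) > 1e-12 else 0.0
         for z in (Fk.conjugate() * Gk for Fk, Gk in zip(F, G))]
    r = [abs(v) / n for v in dft(R, sign=+1)]   # inverse transform: peak = shift
    return max(range(n), key=r.__getitem__)

f = [1, 5, 2, 8, 3, 7, 4, 6]
g = f[-3:] + f[:-3]                 # f circularly shifted by 3 samples
```

Discarding the magnitude spectrum is what makes the correlation peak sharp and robust to illumination-like amplitude changes, which is the property the paper exploits for registration and matching.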
Variable depth recursion algorithm for leaf sequencing
Siochi, R. Alfredo C.
2007-02-15
The processes of extraction and sweep are basic segmentation steps that are used in leaf sequencing algorithms. A modified version of a commercial leaf sequencer changed the way that the extracts are selected and expanded the search space, but the modification maintained the basic search paradigm of evaluating multiple solutions, each one consisting of up to 12 extracts and a sweep sequence. While it generated the best solutions compared to other published algorithms, it used more computation time. A new, faster algorithm selects one extract at a time but calls itself as an evaluation function a user-specified number of times, after which it uses the bidirectional sweeping window algorithm as the final evaluation function. To achieve a performance comparable to that of the modified commercial leaf sequencer, 2-3 calls were needed, and in all test cases, there were only slight improvements beyond two calls. For the 13 clinical test maps, computation speeds improved by a factor between 12 and 43, depending on the constraints, namely the ability to interdigitate and the avoidance of the tongue-and-groove under dose. The new algorithm was compared to the original and modified versions of the commercial leaf sequencer. It was also compared to other published algorithms for 1400, random, 15x15, test maps with 3-16 intensity levels. In every single case the new algorithm provided the best solution.
Novel and efficient tag SNPs selection algorithms.
Chen, Wen-Pei; Hung, Che-Lun; Tsai, Suh-Jen Jane; Lin, Yaw-Ling
2014-01-01
SNPs are the most abundant form of genetic variation among species; association studies between complex diseases and SNPs or haplotypes have received great attention. However, these studies are restricted by the cost of genotyping all SNPs; thus, it is necessary to find smaller subsets, or tag SNPs, representing the rest of the SNPs. In fact, the existing tag SNP selection algorithms are notoriously time-consuming. An efficient algorithm for tag SNP selection is presented and applied to analyze the HapMap YRI data. The experimental results show that the proposed algorithm achieves better performance than the existing tag SNP selection algorithms; in most cases, it is at least ten times faster than the existing methods. In many cases, when the redundant ratio of the block is high, the proposed algorithm can even be thousands of times faster than previously known methods. Tools and web services for haplotype block analysis, integrated via the Hadoop MapReduce framework, were also developed using the proposed algorithm as the computation kernel. PMID:24212035
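The abstract does not give the selection method in detail; one common greedy sketch (an assumption, not necessarily the authors' algorithm) treats a SNP as tagging another when their allele columns are identical or complementary across all samples:

```python
def tag_snps(matrix):
    """Greedy tag-SNP selection over a samples-by-SNPs 0/1 genotype matrix."""
    ncols = len(matrix[0])
    cols = [tuple(row[j] for row in matrix) for j in range(ncols)]
    def tags(a, b):
        # SNP a tags SNP b if their columns are identical or complementary
        return cols[a] == cols[b] or all(x != y for x, y in zip(cols[a], cols[b]))
    uncovered, chosen = set(range(ncols)), []
    while uncovered:
        best = max(range(ncols),
                   key=lambda j: sum(tags(j, u) for u in uncovered))
        chosen.append(best)
        uncovered -= {u for u in uncovered if tags(best, u)}
    return chosen

# three samples, four SNPs: SNP 0 tags SNPs 0-2; SNP 3 only tags itself
genotypes = [
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 0],
]
```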
ERIC Educational Resources Information Center
Carter, Dorinda J.
2008-01-01
In this article, Dorinda Carter examines the embodiment of a critical race achievement ideology in high-achieving black students. She conducted a yearlong qualitative investigation of the adaptive behaviors that nine high-achieving black students developed and employed to navigate the process of schooling at an upper-class, predominantly white,…
Algorithm for Autonomous Landing
NASA Technical Reports Server (NTRS)
Kuwata, Yoshiaki
2011-01-01
Because of their small size, high maneuverability, and easy deployment, micro aerial vehicles (MAVs) are used for a wide variety of both civilian and military missions. One of their current drawbacks is the vast array of sensors (such as GPS, altimeter, radar, and the like) required to make a landing. Due to the MAV's small payload capacity, this is a major concern. Replacing the imaging sensors with a single monocular camera is sufficient to land a MAV. By applying optical flow algorithms to images obtained from the camera, time-to-collision can be measured. This provides a measurement of position and velocity (but not of absolute distance), which can be used to avoid obstacles as well as to facilitate a landing on a flat surface given a set of initial conditions. The key to this approach is to calculate time-to-collision based on some image on the ground. By holding the angular velocity constant, horizontal speed decreases linearly with height, resulting in a smooth landing. Mathematical proofs show that even with actuator saturation or modeling/measurement uncertainties, MAVs can land safely. Landings of this nature may have a higher velocity than is desirable, but this can be compensated for by a cushioning or dampening system, or by using a system of legs to grab onto a surface. Such a monocular camera system can increase vehicle payload size (or correspondingly reduce vehicle size), increase speed of descent, and guarantee a safe landing by directly correlating speed to height from the ground.
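The constant time-to-collision idea can be sketched with a one-dimensional descent model in which the commanded sink rate is height/τ, so height decays exponentially and touchdown speed is small. The 1-D simplification and all constants are illustrative assumptions, not the paper's control law:

```python
def constant_tau_descent(h0, tau=4.0, dt=0.01, h_touchdown=0.05):
    """Descend keeping time-to-collision tau = height/speed constant.
    The commanded sink rate v = h/tau shrinks with height, so the vehicle
    decelerates smoothly toward touchdown (assumes h0 > h_touchdown)."""
    h, t = h0, 0.0
    while h > h_touchdown:
        v = h / tau          # speed proportional to remaining height
        h -= v * dt
        t += dt
    return t, v

# from 20 m up, touchdown speed ends up around h_touchdown / tau
t_land, v_land = constant_tau_descent(20.0)
```

The descent time grows only logarithmically with initial height (roughly tau * ln(h0 / h_touchdown)), which is why a single time-to-collision estimate suffices without absolute range.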
An Investigation of Algorithm Justification in Elementary School Mathematics.
ERIC Educational Resources Information Center
Weinstein, Marian Sue
The purpose of this study was to determine differences in achievement of fifth-grade students in 16 classes taught two of four mathematical algorithms by one of four instructional strategies. Computational skills and ability to extend the algorithm were tested. The instructional strategies were: pattern, algebraic, pattern followed by…
Comparison of Beam-Based Alignment Algorithms for the ILC
Smith, J.C.; Gibbons, L.; Patterson, J.R.; Rubin, D.L.; Sagan, D.; Tenenbaum, P.; /SLAC
2006-03-15
The main linac of the International Linear Collider (ILC) requires more sophisticated alignment techniques than those provided by survey alone. Various Beam-Based Alignment (BBA) algorithms have been proposed to achieve the desired low emittance preservation. Dispersion Free Steering, Ballistic Alignment and the Kubo method are compared. Alignment algorithms are also tested in the presence of an Earth-like stray field.
The Centrality of Engagement in Higher Education
ERIC Educational Resources Information Center
Fitzgerald, Hiram E.; Bruns, Karen; Sonka, Steven T.; Furco, Andrew; Swanson, Louis
2016-01-01
The centrality of engagement is critical to the success of higher education in the future. Engagement is essential to most effectively achieving the overall purpose of the university, which is focused on the knowledge enterprise. Today's engagement is scholarly, is an aspect of learning and discovery, and enhances society and higher education.…
Reasoning about systolic algorithms
Purushothaman, S.; Subrahmanyam, P.A.
1988-12-01
The authors present a methodology for verifying the correctness of systolic algorithms. The methodology is based on solving a set of Uniform Recurrence Equations obtained from a description of systolic algorithms as a set of recursive equations. They present an approach to mechanically verify the correctness of systolic algorithms using the Boyer-Moore theorem prover. A mechanical correctness proof of an example from the literature is also presented.
Improved Bat Algorithm Applied to Multilevel Image Thresholding
2014-01-01
Multilevel image thresholding is a very important image processing technique that is used as a basis for image segmentation and further higher-level processing. However, the required computational time for exhaustive search grows exponentially with the number of desired thresholds. Swarm intelligence metaheuristics are well known as successful and efficient optimization methods for intractable problems. In this paper, we adapted one of the latest swarm intelligence algorithms, the bat algorithm, to the multilevel image thresholding problem. The results of testing on standard benchmark images show that the bat algorithm is comparable with other state-of-the-art algorithms. We then improved the standard bat algorithm with modifications that add elements from differential evolution and from the artificial bee colony algorithm. Our proposed improved bat algorithm proved to be better than five other state-of-the-art algorithms, improving the quality of results in all cases and significantly improving convergence speed. PMID:25165733
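A minimal (unimproved) bat algorithm, stripped of the paper's DE and ABC modifications, can be sketched as follows; the parameter values and the sphere test function are illustrative assumptions:

```python
import random

def bat_algorithm(obj, dim, n=20, iters=200, fmin=0.0, fmax=2.0,
                  loudness=0.9, pulse=0.5, lo=-5.0, hi=5.0, seed=1):
    """Minimal bat algorithm minimising obj over [lo, hi]^dim."""
    rng = random.Random(seed)
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    fit = [obj(b) for b in x]
    best = min(range(n), key=fit.__getitem__)
    for _ in range(iters):
        for i in range(n):
            freq = fmin + (fmax - fmin) * rng.random()   # random frequency
            v[i] = [vj + (xj - bj) * freq
                    for vj, xj, bj in zip(v[i], x[i], x[best])]
            cand = [min(hi, max(lo, xj + vj)) for xj, vj in zip(x[i], v[i])]
            if rng.random() > pulse:                     # local walk near best
                cand = [bj + 0.01 * rng.gauss(0.0, 1.0) for bj in x[best]]
            fc = obj(cand)
            if fc < fit[i] and rng.random() < loudness:  # conditional accept
                x[i], fit[i] = cand, fc
                if fc < fit[best]:
                    best = i
    return x[best], fit[best]

sphere = lambda p: sum(t * t for t in p)
xbest, fbest = bat_algorithm(sphere, 2)
```

For multilevel thresholding, obj would instead be a between-class variance or entropy criterion over candidate threshold vectors.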
Competing Sudakov veto algorithms
NASA Astrophysics Data System (ADS)
Kleiss, Ronald; Verheyen, Rob
2016-07-01
We present a formalism to analyze the distribution produced by a Monte Carlo algorithm. We perform these analyses on several versions of the Sudakov veto algorithm, adding a cutoff, a second variable and competition between emission channels. The formal analysis allows us to prove that multiple, seemingly different competition algorithms, including those that are currently implemented in most parton showers, lead to the same result. Finally, we test their performance in a semi-realistic setting and show that there are significantly faster alternatives to the commonly used algorithms.
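The basic veto step can be sketched for the simplest case of constant densities, where the target distribution is exponential; the function names and constants are illustrative assumptions:

```python
import math
import random

def veto_sample(f, g, g_step, rng, t=0.0):
    """Draw one scale t distributed as f(t)*exp(-integral of f), given an
    overestimate g(t) >= f(t) whose Sudakov can be inverted analytically."""
    while True:
        t = g_step(t, rng.random())       # propose next scale from g
        if rng.random() < f(t) / g(t):    # accept with ratio f/g, else veto
            return t

# constant toy densities: the target is then exponential with rate a
a, b = 0.5, 2.0
f = lambda t: a
g = lambda t: b
g_step = lambda t, u: t - math.log(1.0 - u) / b
rng = random.Random(7)
samples = [veto_sample(f, g, g_step, rng) for _ in range(20000)]
```

Rejected proposals continue from the proposed scale rather than restarting, which is exactly what makes the veto construction reproduce the Sudakov form factor.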
Network Algorithms for Detection of Radiation Sources
Rao, Nageswara S; Brooks, Richard R; Wu, Qishi
2014-01-01
estimate, typically specified as a multiplier of the background radiation level. A judicious selection of this source multiplier is essential to achieve optimal detection probability at a specified false alarm rate. Typically, this threshold is chosen from the Receiver Operating Characteristic (ROC) by varying the source multiplier estimate. The ROC is expected to have a monotonically increasing profile relating detection probability and false alarm rate. We derived ROCs for multiple indoor tests using KMB datasets, which revealed an unexpected loop shape: as the multiplier increases, detection probability and false alarm rate both increase up to a limit, and then both contract. Consequently, two detection probabilities correspond to the same false alarm rate, and the higher one is achieved at a lower multiplier, which is the desired operating point. Using Chebyshev's inequality we analytically confirm this shape. We then present two improved network-SPRT methods by (a) using the threshold offset as a weighting factor for the binary decisions from individual detectors in a weighted majority voting fusion rule, and (b) applying a composite SPRT derived using measurements from all counters.
The Higher Education Enterprise.
ERIC Educational Resources Information Center
Ottinger, Cecilia A.
1991-01-01
Higher education not only contributes to the development of the human resources and intellectual betterment of the nation but is also a major economic enterprise. This research brief reviews and highlights data on the size and growth of higher education and illustrates how higher education institutions are preparing the future labor force. It…
Reflections on "Higher Education"
ERIC Educational Resources Information Center
Gilbert, Felix
1974-01-01
The elitist, professional, and philosophical elements of higher education are reflected upon with stress on the differences between higher education and higher learning, where education is concerned with giving wider groups a share in a broad image of man, and learning is concerned with increasing specialization. (JH)
A Fast Robot Identification and Mapping Algorithm Based on Kinect Sensor.
Zhang, Liang; Shen, Peiyi; Zhu, Guangming; Wei, Wei; Song, Houbing
2015-01-01
Internet of Things (IoT) is driving innovation in an ever-growing set of application domains such as intelligent processing for autonomous robots. For an autonomous robot, one grand challenge is how to sense its surrounding environment effectively. The Simultaneous Localization and Mapping with RGB-D Kinect camera sensor on robot, called RGB-D SLAM, has been developed for this purpose but some technical challenges must be addressed. Firstly, the efficiency of the algorithm cannot satisfy real-time requirements; secondly, the accuracy of the algorithm is unacceptable. In order to address these challenges, this paper proposes a set of novel improvement methods as follows. Firstly, the ORiented Brief (ORB) method is used in feature detection and descriptor extraction. Secondly, a bidirectional Fast Library for Approximate Nearest Neighbors (FLANN) k-Nearest Neighbor (KNN) algorithm is applied to feature match. Then, the improved RANdom SAmple Consensus (RANSAC) estimation method is adopted in the motion transformation. In the meantime, high precision General Iterative Closest Points (GICP) is utilized to register a point cloud in the motion transformation optimization. To improve the accuracy of SLAM, the reduced dynamic covariance scaling (DCS) algorithm is formulated as a global optimization problem under the G2O framework. The effectiveness of the improved algorithm has been verified by testing on standard data and comparing with the ground truth obtained on Freiburg University's datasets. The Dr Robot X80 equipped with a Kinect camera is also applied in a building corridor to verify the correctness of the improved RGB-D SLAM algorithm. With the above experiments, it can be seen that the proposed algorithm achieves higher processing speed and better accuracy. PMID:26287198
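The bidirectional matching step can be sketched as a mutual nearest-neighbour check over binary descriptors (Hamming distance, as used for ORB). The tiny integer descriptors below are illustrative; a real implementation would run FLANN over 256-bit descriptors:

```python
def mutual_nn_matches(desc1, desc2):
    """Bidirectional NN matching: keep a pair only when each descriptor
    is the other's nearest neighbour under Hamming distance."""
    def ham(a, b):
        return bin(a ^ b).count("1")
    def nn(q, pool):
        return min(range(len(pool)), key=lambda j: ham(q, pool[j]))
    fwd = [nn(d, desc2) for d in desc1]             # forward matches
    # keep only pairs confirmed by the reverse lookup
    return [(i, j) for i, j in enumerate(fwd) if nn(desc2[j], desc1) == i]

desc1 = [0b1111, 0b0001, 0b1000]
desc2 = [0b0011, 0b1110]
matches = mutual_nn_matches(desc1, desc2)
```

The mutual check discards one-sided matches (here descriptor 2 of the first set), which is the usual way a bidirectional KNN pass prunes outliers before RANSAC.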
The Mechanics of Human Achievement
Duckworth, Angela L.; Eichstaedt, Johannes C.; Ungar, Lyle H.
2015-01-01
Countless studies have addressed why some individuals achieve more than others. Nevertheless, the psychology of achievement lacks a unifying conceptual framework for synthesizing these empirical insights. We propose organizing achievement-related traits by two possible mechanisms of action: Traits that determine the rate at which an individual learns a skill are talent variables and can be distinguished conceptually from traits that determine the effort an individual puts forth. This approach takes inspiration from Newtonian mechanics: achievement is akin to distance traveled, effort to time, skill to speed, and talent to acceleration. A novel prediction from this model is that individual differences in effort (but not talent) influence achievement (but not skill) more substantially over longer (rather than shorter) time intervals. Conceptualizing skill as the multiplicative product of talent and effort, and achievement as the multiplicative product of skill and effort, advances similar, but less formal, propositions by several important earlier thinkers. PMID:26236393
Mathematics Achievement in High- and Low-Achieving Secondary Schools
ERIC Educational Resources Information Center
Mohammadpour, Ebrahim; Shekarchizadeh, Ahmadreza
2015-01-01
This paper identifies the amount of variance in mathematics achievement in high- and low-achieving schools that can be explained by school-level factors, while controlling for student-level factors. The data were obtained from 2679 Iranian eighth graders who participated in the 2007 Trends in International Mathematics and Science Study. Of the…
Attribution theory in science achievement
NASA Astrophysics Data System (ADS)
Craig, Martin
Recent research reveals consistent lags in American students' science achievement scores. Not only are the scores lower in the United States compared to other developed nations, but even within the United States, too many students are well below science proficiency scores for their grade levels. The current research addresses this problem by examining potential malleable factors that may predict science achievement in twelfth graders, using 2009 data from the National Assessment of Educational Progress (NAEP). Principal component factor analysis was conducted to determine the specific items that contribute to each overall factor. A series of multiple regressions was then run to estimate the predictive value of each of these factors for science achievement. All significant factors were ultimately examined together (also using multiple regression) to determine the most powerful predictors of science achievement; the results suggested interventions to strengthen students' science achievement scores and encourage persistence in the sciences at the college level and beyond. Although a variety of research highlights how students in the US are falling behind other developed nations in science and math achievement, as yet little research has addressed ways of intervening to close this gap. The current research is a starting point, seeking to identify malleable factors that contribute to science achievement. More specifically, this research examined the types of attributions that predict science achievement in twelfth-grade students.
Efficient Record Linkage Algorithms Using Complete Linkage Clustering
Mamun, Abdullah-Al; Aseltine, Robert; Rajasekaran, Sanguthevar
2016-01-01
Datasets from different agencies often contain records for the same individuals. Linking these datasets to identify all the records belonging to the same individuals is a crucial and challenging problem, especially given the large volumes of data. A large number of the available algorithms for record linkage are prone either to time inefficiency or to low accuracy in finding matches and non-matches among the records. In this paper we propose efficient and reliable sequential and parallel algorithms for the record linkage problem employing hierarchical clustering methods. We employ complete-linkage hierarchical clustering algorithms to address this problem. In addition to hierarchical clustering, we also use two other techniques: elimination of duplicate records and blocking. Our algorithms use sorting as a subroutine to identify identical copies of records. We have tested our algorithms on datasets with millions of synthetic records. Experimental results show that our algorithms achieve nearly 100% accuracy. Parallel implementations achieve almost linear speedups. Time complexities of these algorithms do not exceed those of previous best-known algorithms. Our proposed algorithms outperform previous best-known algorithms in terms of accuracy while consuming reasonable run times. PMID:27124604
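The combination of blocking and complete-linkage clustering can be sketched as follows; the character-level distance, first-character blocking key, and toy records are illustrative assumptions, not the paper's configuration:

```python
def distance(a, b):
    # character-level Hamming distance between equal-length record strings
    return sum(c1 != c2 for c1, c2 in zip(a, b))

def complete_linkage(records, threshold):
    """Merge clusters while the *maximum* pairwise distance between two
    clusters (complete linkage) stays within the threshold."""
    clusters = [[r] for r in records]
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                if max(distance(x, y)
                       for x in clusters[i] for y in clusters[j]) <= threshold:
                    clusters[i] += clusters.pop(j)
                    merged = True
                    break
            if merged:
                break
    return clusters

def link_records(records, threshold=1):
    # blocking: only compare records that share a cheap key (first character)
    blocks = {}
    for r in records:
        blocks.setdefault(r[0], []).append(r)
    out = []
    for blk in blocks.values():
        out.extend(complete_linkage(blk, threshold))
    return out

records = ["smith j", "smyth j", "jones a", "jonet a"]
clusters = link_records(records)
```

Complete linkage keeps clusters tight (every pair within the threshold), which is why it suits linkage tasks where one bad merge would conflate two individuals.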
Operational algorithm development and refinement approaches
NASA Astrophysics Data System (ADS)
Ardanuy, Philip E.
2003-11-01
Next-generation polar and geostationary systems, such as the National Polar-orbiting Operational Environmental Satellite System (NPOESS) and the Geostationary Operational Environmental Satellite (GOES)-R, will deploy new generations of electro-optical reflective and emissive capabilities. These will include low-radiometric-noise, improved-spatial-resolution multispectral and hyperspectral imagers and sounders. To achieve specified performances (e.g., measurement accuracy, precision, uncertainty, and stability), and to best utilize the advanced space-borne sensing capabilities, a new generation of retrieval algorithms will be implemented. In most cases, these advanced algorithms benefit from ongoing testing and validation using heritage research mission algorithms and data, e.g., the Earth Observing System (EOS) Moderate-resolution Imaging Spectroradiometer (MODIS) and the Shuttle Ozone Limb Scattering Experiment (SOLSE)/Limb Ozone Retrieval Experiment (LORE). In these instances, an algorithm's theoretical basis is not static, but rather improves with time. Once frozen, an operational algorithm can "lose ground" relative to research analogs. Cost/benefit analyses provide a basis for change management. The challenge is in reconciling and balancing the stability, and "comfort," that today's generation of operational platforms provides (well-characterized, known sensors and algorithms) with the greatly improved quality, opportunities, and risks that the next generation of operational sensors and algorithms offers. By using best practices and lessons learned from heritage/groundbreaking activities, it is possible to implement an agile process that enables change while managing change. This approach combines a "known-risk" frozen baseline and preset completion schedules with insertion opportunities for algorithm advances as ongoing validation activities identify and repair areas of weak performance. This paper describes an objective, adaptive implementation roadmap that
Algorithm That Synthesizes Other Algorithms for Hashing
NASA Technical Reports Server (NTRS)
James, Mark
2010-01-01
An algorithm that includes a collection of several subalgorithms has been devised as a means of synthesizing still other algorithms (which could include computer code) that utilize hashing to determine whether an element (typically, a number or other datum) is a member of a set (typically, a list of numbers). Each subalgorithm synthesizes an algorithm (e.g., a block of code) that maps a static set of key hashes to a somewhat linear monotonically increasing sequence of integers. The goal in formulating this mapping is to cause the length of the sequence thus generated to be as close as practicable to the original length of the set and thus to minimize gaps between the elements. The advantage of the approach embodied in this algorithm is that it completely avoids the traditional approach of hash-key look-ups that involve either secondary hash generation and look-up or further searching of a hash table for a desired key in the event of collisions. This algorithm guarantees that it will never be necessary to perform a search or to generate a secondary key in order to determine whether an element is a member of a set. This algorithm further guarantees that any algorithm that it synthesizes can be executed in constant time. To enforce these guarantees, the subalgorithms are formulated to employ a set of techniques, each of which works very effectively covering a certain class of hash-key values. These subalgorithms are of two types, summarized as follows: Given a list of numbers, try to find one or more solutions in which, if each number is shifted to the right by a constant number of bits and then masked with a rotating mask that isolates a set of bits, a unique number is thereby generated. In a variant of the foregoing procedure, omit the masking. Try various combinations of shifting, masking, and/or offsets until the solutions are found. From the set of solutions, select the one that provides the greatest compression for the representation and is executable in the
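The shift-and-mask search that these subalgorithms perform can be sketched roughly as follows; the key set, shift range, and mask width are illustrative assumptions, not details from the paper:

```python
def find_shift_mask(keys, max_shift=16, mask_bits=4):
    """Search for a (shift, mask) pair such that (key >> shift) & mask
    maps every key in the set to a distinct small integer."""
    mask = (1 << mask_bits) - 1
    for shift in range(max_shift):
        if len({(k >> shift) & mask for k in keys}) == len(keys):
            return shift, mask  # collision-free mapping found
    return None

# Hypothetical key set; here the low 4 bits already separate the keys.
keys = [0x12, 0x47, 0x83, 0xF1]
shift, mask = find_shift_mask(keys)
table = {(k >> shift) & mask: k for k in keys}

def member(x):
    # Constant-time membership test: one shift, one mask, one comparison,
    # with no secondary hashing and no table search on collisions.
    return table.get((x >> shift) & mask) == x
```

When no single (shift, mask) pair separates the keys, the subalgorithms described above fall back on rotating masks, offsets, and combinations thereof; the sketch shows only the simplest case.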
The hierarchical algorithms--theory and applications
NASA Astrophysics Data System (ADS)
Su, Zheng-Yao
scan scheme applicable to problem domains of any high dimension and of arbitrary geometry (scan is an important primitive of parallel computing). In addition, from implementation results, the hierarchical cluster labeling algorithm has proved to work equally well on MIMD machines, though originally designed for SIMD machines. Based on this success, we further study the hierarchical structure hidden in the algorithm. Hierarchical structure is a conceptual framework frequently used in building models for the study of a great variety of problems. This structure serves not only to describe the complexity of the system at different levels, but also to achieve some goals targeted by the problem, i.e., an algorithm to solve the problem. In this regard, we investigate the similarities and differences between this algorithm and others, including the FFT and the Barnes-Hut method, in terms of their hierarchical structures.
An improved localization algorithm based on genetic algorithm in wireless sensor networks.
Peng, Bo; Li, Lei
2015-04-01
Wireless sensor networks (WSNs) are widely used in many applications. A WSN is a wireless, decentralized network comprised of nodes which autonomously set up the network. Node localization, that is, determining the position of a node in the network, is an essential part of many sensor network operations and applications. Existing localization algorithms can be classified into two categories: range-based and range-free. Range-based localization algorithms have hardware requirements and are thus expensive to implement in practice. Range-free localization algorithms reduce the hardware cost. Because of the hardware limitations of WSN devices, range-free localization solutions are being pursued as a cost-effective alternative to more expensive range-based approaches. However, these techniques usually have higher localization error compared to range-based algorithms. DV-Hop is a typical range-free localization algorithm utilizing hop-distance estimation. In this paper, we propose an improved DV-Hop algorithm based on a genetic algorithm. Simulation results show that our proposed algorithm improves the localization accuracy compared with previous algorithms.
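DV-Hop's hop-distance estimation, which the genetic algorithm then refines, can be illustrated with a toy example; the anchor positions, hop counts, and function names below are hypothetical, not taken from the paper:

```python
import math

# Hypothetical anchors with known positions, plus hop counts obtained by
# flooding: between anchor pairs, and from one unknown node to each anchor.
anchors = {"A": (0.0, 0.0), "B": (30.0, 0.0), "C": (0.0, 40.0)}
hops_between = {("A", "B"): 3, ("A", "C"): 4, ("B", "C"): 5}
hops_to_unknown = {"A": 2, "B": 1, "C": 3}

def avg_hop_size(name):
    """Average distance per hop seen by one anchor (DV-Hop's correction factor)."""
    dist = hop = 0.0
    for (p, q), h in hops_between.items():
        if name in (p, q):
            (x1, y1), (x2, y2) = anchors[p], anchors[q]
            dist += math.hypot(x2 - x1, y2 - y1)
            hop += h
    return dist / hop

# Estimated distance from the unknown node to each anchor; these estimates
# then feed a position solver (multilateration, or GA-based in the paper).
est = {a: avg_hop_size(a) * hops_to_unknown[a] for a in anchors}
```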
Combined string searching algorithm based on Knuth-Morris-Pratt and Boyer-Moore algorithms
NASA Astrophysics Data System (ADS)
Tsarev, R. Yu; Chernigovskiy, A. S.; Tsareva, E. A.; Brezitskaya, V. V.; Nikiforov, A. Yu; Smirnov, N. A.
2016-04-01
The string searching task can be classified as a classic information processing task. Users either encounter the solution of this task while working with text processors or browsers, employing standard built-in tools, or the task is solved unseen by the users while they work with various computer programs. Nowadays there are many algorithms for solving the string searching problem. The main criterion of these algorithms' effectiveness is searching speed. The larger the shift of the pattern relative to the string in the case of a mismatch between pattern and string characters, the higher the algorithm's running speed. This article offers a combined algorithm, developed on the basis of the well-known Knuth-Morris-Pratt and Boyer-Moore string searching algorithms. These algorithms are based on two different basic principles of pattern matching: Knuth-Morris-Pratt is based upon forward pattern matching, and Boyer-Moore upon backward pattern matching. By uniting these two algorithms, the combined algorithm acquires the larger shift in the case of a mismatch. The article provides an example which illustrates the work of the Boyer-Moore and Knuth-Morris-Pratt algorithms and of the combined algorithm, and shows the advantage of the latter in solving the string searching problem.
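As a rough illustration of the backward-matching shift that Boyer-Moore contributes, here is the bad-character rule in Horspool's simplified form; this is not the authors' combined algorithm, only the shift principle it builds on:

```python
def horspool(text, pattern):
    """Backward pattern matching with the bad-character shift, the core
    shift rule behind Boyer-Moore (Horspool's simplification)."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return -1 if m else 0
    # Shift table: distance from a character's last occurrence
    # (excluding the final position) to the end of the pattern.
    shift = {c: m - i - 1 for i, c in enumerate(pattern[:-1])}
    i = 0
    while i <= n - m:
        if text[i:i + m] == pattern:
            return i
        i += shift.get(text[i + m - 1], m)  # mismatch: jump by table value
    return -1
```

Characters absent from the pattern allow the maximal shift of the full pattern length, which is why backward matching can skip large portions of the string.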
Lightning detection and exposure algorithms for smartphones
NASA Astrophysics Data System (ADS)
Wang, Haixin; Shao, Xiaopeng; Wang, Lin; Su, Laili; Huang, Yining
2015-05-01
This study focuses on the key theory of lightning detection and exposure, and on the corresponding experiments. First, an algorithm based on the differential operation between two adjacent frames is selected to remove the background information and extract the lightning signal, and a threshold detection algorithm is applied to achieve precise detection of lightning. Second, an algorithm is proposed to obtain the scene exposure value, which can automatically detect the external illumination status. Subsequently, a look-up table can be built on the basis of the relationship between the exposure value and the average image brightness to achieve rapid automatic exposure. Finally, based on a USB 3.0 industrial camera with a CMOS imaging sensor, a hardware test platform is established and experiments are carried out on it to verify the performance of the proposed algorithms. The algorithms can quickly and effectively capture clear lightning pictures, including special nighttime scenes, which will provide beneficial support to the smartphone industry, since current exposure methods in smartphones often miss the capture or produce overexposed or underexposed pictures.
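The frame-differencing detection step can be sketched in a few lines; the toy frames and the threshold value below are illustrative assumptions:

```python
def detect_lightning(prev_frame, frame, threshold=50):
    """Frame differencing: subtract the previous frame to suppress the
    static background, then threshold to flag candidate lightning pixels.
    Frames are 2-D lists of grayscale values (a toy stand-in for camera data)."""
    flashes = []
    for r, (row_p, row_c) in enumerate(zip(prev_frame, frame)):
        for c, (p, q) in enumerate(zip(row_p, row_c)):
            if abs(q - p) > threshold:  # large jump between frames => flash pixel
                flashes.append((r, c))
    return flashes

prev = [[10, 10], [10, 10]]
curr = [[10, 200], [10, 10]]
```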
NASA Astrophysics Data System (ADS)
An, Lin; Shen, Tueng T.; Wang, Ruikang K.
2011-10-01
This paper presents comprehensive and depth-resolved retinal microvasculature images within the human retina, achieved by a newly developed ultrahigh sensitive optical microangiography (UHS-OMAG) system. Due to its high flow sensitivity, UHS-OMAG is much more sensitive to tissue motion caused by the involuntary movement of the human eye and head than the traditional OMAG system. To mitigate these motion artifacts in the final imaging results, we propose a new phase compensation algorithm in which the traditional phase-compensation algorithm is applied repeatedly to efficiently minimize the motion artifacts. Comparatively, this new algorithm demonstrates at least 8 to 25 times higher motion tolerance, critical for the UHS-OMAG system to achieve retinal microvasculature images of high quality. Furthermore, the new UHS-OMAG system employs a high-speed line-scan CMOS camera (240 kHz A-line scan rate) to capture 500 A-lines for one B-frame at a 400 Hz frame rate. With this system, we performed a series of in vivo experiments to visualize the retinal microvasculature in humans. Two featured imaging protocols are utilized. The first has low lateral resolution (16 μm) and a wide field of view (4 × 3 mm² with a single scan and 7 × 8 mm² for multiple scans), while the second has high lateral resolution (5 μm) and a narrow field of view (1.5 × 1.2 mm² with a single scan). The imaging performance delivered by our system suggests that UHS-OMAG can be a promising noninvasive alternative to current clinical retinal microvasculature imaging techniques for the diagnosis of eye diseases with significant vascular involvement, such as diabetic retinopathy and age-related macular degeneration.
Algorithms for skiascopy measurement automatization
NASA Astrophysics Data System (ADS)
Fomins, Sergejs; Trukša, Renārs; Krūmiņa, Gunta
2014-10-01
An automatic dynamic infrared retinoscope was developed, which allows the procedure to run at a much higher rate. Our system uses a USB image sensor with up to a 180 Hz refresh rate, equipped with a long-focus objective and an 850 nm infrared light-emitting diode as the light source. Two servo motors driven by a microprocessor control the rotation of the semitransparent mirror and the motion of the retinoscope chassis. The image of the eye's pupil reflex is captured via software and analyzed along the horizontal plane. An algorithm for automatic analysis of the accommodative state was developed based on the intensity changes of the fundus reflex.
Totally parallel multilevel algorithms
NASA Technical Reports Server (NTRS)
Frederickson, Paul O.
1988-01-01
Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is then recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.
Perils of Standardized Achievement Testing
ERIC Educational Resources Information Center
Haladyna, Thomas M.
2006-01-01
This article argues that the validity of standardized achievement test-score interpretation and use is problematic; consequently, confidence and trust in such test scores may often be unwarranted. The problem is particularly severe in high-stakes situations. This essay provides a context for understanding standardized achievement testing, then…
Poor Results for High Achievers
ERIC Educational Resources Information Center
Bui, Sa; Imberman, Scott; Craig, Steven
2012-01-01
Three million students in the United States are classified as gifted, yet little is known about the effectiveness of traditional gifted and talented (G&T) programs. In theory, G&T programs might help high-achieving students because they group them with other high achievers and typically offer specially trained teachers and a more advanced…
Examination Regimes and Student Achievement
ERIC Educational Resources Information Center
Cosentino de Cohen, Clemencia
2010-01-01
Examination regimes at the end of secondary school vary greatly intra- and cross-nationally, and in recent years have undergone important reforms often geared towards increasing student achievement. This research presents a comparative analysis of the relationship between examination regimes and student achievement in the OECD. Using a micro…
General Achievement Trends: New Jersey
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
Teaching the Low Level Achiever.
ERIC Educational Resources Information Center
Salomone, Ronald E., Ed.
1986-01-01
Intended for teachers of the English language arts, the articles in this issue offer suggestions and techniques for teaching the low level achiever. Titles and authors of the articles are as follows: (1) "A Point to Ponder" (Rachel Martin); (2) "Tracking: A Self-Fulfilling Prophecy of Failure for the Low Level Achiever" (James Christopher Davis);…
Family Status and School Achievement.
ERIC Educational Resources Information Center
Chalker, Rhoda N.; Horns, Virginia
This study tested the hypothesis that there is no significant difference in reading achievement among children in grades 2 through 5 related to family structure. Researchers administered the Stanford Achievement Test to 119 students in an Alabama city suburban school system. Of the sample, 69 children lived in intact families and 50 lived in…
General Achievement Trends: North Carolina
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
Classroom Composition and Achievement Gains.
ERIC Educational Resources Information Center
Leiter, Jeffrey
1983-01-01
Third-grade students in high ability groups in mathematics achieved greater gains than students in low ability groups. The opposite results occurred in reading achievement. Possible reasons for this difference include different instructional techniques for reading and math and the effect of home environment on learning. (IS)
Raising Boys' Achievement in Schools.
ERIC Educational Resources Information Center
Bleach, Kevan, Ed.
This book offers insights into the range of strategies and good practice being used to raise the achievement of boys. Case studies by school-based practitioners suggest ideas and measures to address the issue of achievement by boys. The contributions are: (1) "Why the Likely Lads Lag Behind" (Kevan Bleach); (2) "Helping Boys Do Better in Their…
School Size and Student Achievement
ERIC Educational Resources Information Center
Riggen, Vicki
2013-01-01
This study examined whether a relationship between high school size and student achievement exists in Illinois public high schools in reading and math, as measured by the Prairie State Achievement Exam (PSAE), which is administered to all Illinois 11th-grade students. This study also examined whether the factors of socioeconomic status, English…
Stress Correlates and Academic Achievement.
ERIC Educational Resources Information Center
Bentley, Donna Anderson; And Others
An ongoing concern for educators is the identification of factors that contribute to or are associated with academic achievement; one such group of variables that has received little attention are those involving stress. The relationship between perceived sources of stress and academic achievement was examined to determine if reactions to stress…
LCD motion blur: modeling, analysis, and algorithm.
Chan, Stanley H; Nguyen, Truong Q
2011-08-01
Liquid crystal display (LCD) devices are well known for their slow responses due to the physical limitations of liquid crystals. Therefore, fast-moving objects in a scene are often perceived as blurred. This effect is known as LCD motion blur. In order to reduce LCD motion blur, an accurate LCD model and an efficient deblurring algorithm are needed. However, existing LCD motion blur models are insufficient to reflect the limitations of the human eye tracking system. Also, the spatiotemporal equivalence in LCD motion blur models has not been proven directly in the discrete 2-D spatial domain, although it is widely used. There are three main contributions of this paper: modeling, analysis, and algorithm. First, a comprehensive LCD motion blur model is presented, in which human eye tracking limits are taken into consideration. Second, a complete analysis of spatiotemporal equivalence is provided and verified using real video sequences. Third, an LCD motion blur reduction algorithm is proposed. The proposed algorithm solves an l(1)-norm regularized least-squares minimization problem using a subgradient projection method. Numerical results show that the proposed algorithm gives higher peak SNR, lower temporal error, and lower spatial error than motion-compensated inverse filtering and the Lucy-Richardson deconvolution algorithm, which are two state-of-the-art LCD deblurring algorithms. PMID:21292596
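A minimal stand-in for the optimization step is plain subgradient descent on the same l1-regularized least-squares objective; the matrices, step size, and iteration count below are hypothetical, and the paper's actual method is a subgradient projection, not this simplified loop:

```python
def subgradient_l1_ls(A, b, lam=0.1, step=0.01, iters=2000):
    """Minimize ||Ax - b||^2 + lam * ||x||_1 by plain subgradient descent."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # Residual r = Ax - b, computed once per iteration
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        for j in range(n):
            g = 2.0 * sum(A[i][j] * r[i] for i in range(m))  # gradient of the LS term
            g += lam * ((x[j] > 0) - (x[j] < 0))             # a subgradient of |x_j|
            x[j] -= step * g
    return x

# Toy problem: with an identity A, the minimizer is b shrunk toward zero
# by the l1 penalty (soft thresholding).
A = [[1.0, 0.0], [0.0, 1.0]]
b = [1.0, 0.0]
x = subgradient_l1_ls(A, b)
```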
Speckle imaging algorithms for planetary imaging
Johansson, E.
1994-11-15
I will discuss the speckle imaging algorithms used to process images of the impact sites of the collision of comet Shoemaker-Levy 9 with Jupiter. The algorithms use a phase retrieval process based on the average bispectrum of the speckle image data. High resolution images are produced by estimating the Fourier magnitude and Fourier phase of the image separately, then combining them and inverse transforming to achieve the final result. I will show raw speckle image data and high-resolution image reconstructions from our recent experiment at Lick Observatory.
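The final reconstruction step described above, combining separately estimated Fourier magnitude and phase and inverse transforming, can be sketched with a naive DFT; the signal is an arbitrary toy example, and here the two estimates come from the same spectrum purely to show the recombination:

```python
import cmath

def dft(x, inverse=False):
    """Naive DFT, enough to illustrate the reconstruction step."""
    n = len(x)
    s = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(s * 2j * cmath.pi * j * k / n) for k in range(n))
           for j in range(n)]
    return [v / n for v in out] if inverse else out

signal = [1.0, 2.0, 3.0, 4.0]
spectrum = dft(signal)
# Speckle imaging estimates |F| and arg(F) by separate procedures; here we
# split and recombine the same spectrum to show the final step.
mag = [abs(v) for v in spectrum]
phase = [cmath.phase(v) for v in spectrum]
combined = [m * cmath.exp(1j * p) for m, p in zip(mag, phase)]
image = [v.real for v in dft(combined, inverse=True)]
```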
Li, G; Sanchez, V; Nagaraj, P C S B; Khan, S; Rajpoot, N
2015-12-01
We propose a novel multitarget tracking framework for Myosin VI protein molecules in total internal reflection fluorescence microscopy sequences which integrates an extended Hungarian algorithm with an interacting multiple model filter. The extended Hungarian algorithm, a method based on the linear assignment problem, helps to solve the measurement assignment and spot association problems commonly encountered when dealing with multiple targets, while a two-motion-model interacting multiple model filter increases the tracking accuracy by modelling the nonlinear dynamics of Myosin VI protein molecules on actin filaments. The evaluation of our tracking framework is conducted on both real and synthetic total internal reflection fluorescence microscopy sequences. The results show that the framework achieves higher tracking accuracies compared to state-of-the-art tracking methods, especially for sequences with high spot density. PMID:26259144
An RSVM based two-teachers-one-student semi-supervised learning algorithm.
Chang, Chien-Chung; Pao, Hsing-Kuo; Lee, Yuh-Jye
2012-01-01
Based on the reduced SVM (RSVM), we propose a multi-view algorithm, two-teachers-one-student (2T1S), for semi-supervised learning (SSL). With RSVM, unlike typical multi-view methods, reduced sets suggest different views in the represented kernel feature space rather than in the input space. No label information is necessary when we select reduced sets, and this makes applying RSVM to SSL possible. Our algorithm blends the concepts of co-training and consensus training. Through co-training, the classifiers generated by two views can "teach" the third classifier from the remaining view to learn, and this process is performed for each choice of teachers-student combination. By consensus training, predictions from more than one view can give us higher confidence for labeling unlabeled data. The results show that the proposed 2T1S achieves high cross-validation accuracy, even compared to training with all the label information available.
A hierarchical exact accelerated stochastic simulation algorithm
Orendorff, David; Mjolsness, Eric
2012-01-01
A new algorithm, “HiER-leap” (hierarchical exact reaction-leaping), is derived which improves on the computational properties of the ER-leap algorithm for exact accelerated simulation of stochastic chemical kinetics. Unlike ER-leap, HiER-leap utilizes a hierarchical or divide-and-conquer organization of reaction channels into tightly coupled “blocks” and is thereby able to speed up systems with many reaction channels. Like ER-leap, HiER-leap is based on the use of upper and lower bounds on the reaction propensities to define a rejection sampling algorithm with inexpensive early rejection and acceptance steps. But in HiER-leap, large portions of intra-block sampling may be done in parallel. An accept/reject step is used to synchronize across blocks. This method scales well when many reaction channels are present and has desirable asymptotic properties. The algorithm is exact, parallelizable and achieves a significant speedup over the stochastic simulation algorithm and ER-leap on certain problems. This algorithm offers a potentially important step towards efficient in silico modeling of entire organisms. PMID:23231214
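For context, the baseline that ER-leap and HiER-leap accelerate is the stochastic simulation algorithm (Gillespie's direct method); a minimal sketch of one exact SSA step follows, with a hypothetical toy reaction system:

```python
import math
import random

def ssa_step(counts, propensity_fns, stoich, rng):
    """One exact SSA (Gillespie direct method) step: sample the waiting
    time and the next reaction, then apply its stoichiometry."""
    props = [f(counts) for f in propensity_fns]
    total = sum(props)
    if total == 0.0:
        return counts, float("inf")        # no reaction can fire
    tau = -math.log(rng.random()) / total  # exponential waiting time
    r = rng.random() * total               # pick reaction j with prob props[j]/total
    acc = 0.0
    for j, p in enumerate(props):
        acc += p
        if r < acc:
            break
    return [c + d for c, d in zip(counts, stoich[j])], tau

# Hypothetical system: one reaction A -> B with propensity 0.5 * [A].
rng = random.Random(0)
counts, tau = ssa_step([10, 0], [lambda c: 0.5 * c[0]], [[-1, 1]], rng)
```

HiER-leap's contribution is to bound propensities per block of reaction channels so that many such steps can be accepted or rejected cheaply and sampled in parallel.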
New algorithms for the "minimal form" problem
Oliveira, J.S.; Cook, G.O., Jr.; Purtill, M.R. (Center for Communications Research)
1991-12-20
It is widely appreciated that large-scale algebraic computation (performing computer algebra operations on large symbolic expressions) places very significant demands upon existing computer algebra systems. Because of this, parallel versions of many important algorithms have been successfully sought, and clever techniques have been found for improving the speed of the algebraic simplification process. In addition, some attention has been given to the issue of restructuring large expressions, or transforming them into "minimal forms." By "minimal form," we mean that form of an expression that involves a minimum number of operations, in the sense that no simple transformation on the expression leads to a form involving fewer operations. Unfortunately, the progress that has been achieved to date on this very hard problem is not adequate for the very significant demands of large computer algebra problems. In response to this situation, we have developed some efficient algorithms for constructing "minimal forms." In this paper, the multi-stage algorithm in which these new algorithms operate is defined and the features of these algorithms are developed. In a companion paper, we introduce the core algebra engine of a new tool that provides the algebraic framework required for the implementation of these new algorithms.
OpenAD : algorithm implementation user guide.
Utke, J.
2004-05-13
Research in automatic differentiation has led to a number of tools that implement various approaches and algorithms for the most important programming languages. While all these tools have the same mathematical underpinnings, the actual implementations have little in common and mostly are specialized for a particular programming language, compiler internal representation, or purpose. This specialization does not promote an open test bed for experimentation with new algorithms that arise from exploiting structural properties of numerical codes in a source transformation context. OpenAD is being designed to fill this need by providing a framework that allows for relative ease in the implementation of algorithms that operate on a representation of the numerical kernel of a program. Language independence is achieved by using an intermediate XML format and the abstraction of common compiler analyses in Open-Analysis. The intermediate format is mapped to concrete programming languages via two front/back end combinations. The design allows for reuse and combination of already implemented algorithms. We describe the set of algorithms and basic functionality currently implemented in OpenAD and explain the necessary steps to add a new algorithm to the framework.
Grid fill algorithm for vector graphics render on mobile devices
NASA Astrophysics Data System (ADS)
Zhang, Jixian; Yue, Kun; Yuan, Guowu; Zhang, Binbin
2015-12-01
The performance of vector graphics rendering has always been one of the key elements in mobile devices, and the most important step to improve this performance is to enhance the efficiency of polygon fill algorithms. In this paper, we propose a new and more efficient polygon fill algorithm based on the scan line algorithm, the Grid Fill Algorithm (GFA). First, we elaborate GFA through solid fill. Second, we describe the techniques for implementing antialiasing and self-intersecting polygon fill with GFA. Then, we discuss the implementation of GFA based on gradient fill. Generally, compared to other fill algorithms, GFA has better performance and achieves a faster fill speed, which is specifically consistent with the inherent characteristics of mobile devices. Experimental results show that better fill effects can be achieved by using GFA.
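The scan line approach that GFA builds on can be sketched as follows; this is a generic even-odd scan-line fill, not GFA itself, and the polygon is a toy example:

```python
def scanline_fill(polygon, height):
    """Classic scan-line fill: for each row, collect x-intersections of the
    scan line with polygon edges, sort them, and fill between pairs."""
    spans = {}
    n = len(polygon)
    for y in range(height):
        yc = y + 0.5                  # sample at pixel centers
        xs = []
        for i in range(n):
            (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
            if (y1 <= yc < y2) or (y2 <= yc < y1):  # edge crosses scan line
                xs.append(x1 + (yc - y1) * (x2 - x1) / (y2 - y1))
        xs.sort()
        # Even-odd rule: interior lies between consecutive intersection pairs.
        spans[y] = [(xs[i], xs[i + 1]) for i in range(0, len(xs) - 1, 2)]
    return spans

square = [(1.0, 1.0), (5.0, 1.0), (5.0, 4.0), (1.0, 4.0)]
spans = scanline_fill(square, 5)
```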
Artifact removal algorithms for stroke detection using a multistatic MIST beamforming algorithm.
Ricci, E; Di Domenico, S; Cianca, E; Rossi, T
2015-01-01
Microwave imaging (MWI) has recently been shown to be a promising imaging modality for low-complexity, low-cost and fast brain imaging tools, which could play a fundamental role in efficiently managing emergencies related to stroke and hemorrhages. This paper focuses on the UWB radar imaging approach and in particular on the processing algorithms for the backscattered signals. Assuming the use of the multistatic version of the MIST (Microwave Imaging Space-Time) beamforming algorithm, developed by Hagness et al. for the early detection of breast cancer, the paper proposes and compares two artifact removal algorithms. Artifact removal is an essential step of any UWB radar imaging system, and currently considered artifact removal algorithms have been shown not to be effective in the specific scenario of brain imaging. First, the paper proposes modifications of a known artifact removal algorithm. These modifications are shown to be effective in achieving good localization accuracy and fewer false positives. The main contribution, however, is the proposal of an artifact removal algorithm based on statistical methods, which achieves even better performance with much lower computational complexity. PMID:26736661
A Hadoop-Based Algorithm of Generating DEM Grid from Point Cloud Data
NASA Astrophysics Data System (ADS)
Jian, X.; Xiao, X.; Chengfang, H.; Zhizhong, Z.; Zhaohui, W.; Dengzhong, Z.
2015-04-01
Airborne LiDAR technology has proven to be among the most powerful tools for obtaining high-density, high-accuracy and significantly detailed surface information of terrain and surface objects within a short time, from which a Digital Elevation Model (DEM) of high quality can be extracted. Point cloud data generated from the pre-processed data should be classified by segmentation algorithms, so as to separate terrain points from disorganized points, followed by a procedure of interpolating the selected points to turn them into DEM data. Because of the high point density, the whole procedure takes a long time and huge computing resources, a problem that a number of studies have focused on. Hadoop is a distributed system infrastructure developed by the Apache Foundation, which contains a highly fault-tolerant distributed file system (HDFS) with a high transmission rate and a parallel programming model (Map/Reduce). Such a framework is appropriate for DEM generation algorithms to improve efficiency. Point cloud data of Dongting Lake acquired by a Riegl LMS-Q680i laser scanner was utilized as the original data to generate a DEM by a Hadoop-based algorithm implemented in Linux, followed by a traditional procedure programmed in C++ as the comparative experiment. The algorithm's efficiency, coding complexity, and performance-cost ratio are then discussed for the comparison. The results demonstrate that the algorithm's speed depends on the size of the point set and the density of the DEM grid; the non-Hadoop implementation can achieve high performance when memory is large enough, but the Hadoop implementation running on multiple nodes achieves a higher performance-cost ratio when the point set is very large.
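The gridding step can be expressed in Map/Reduce style; the sketch below uses plain Python dictionaries rather than actual Hadoop jobs, and the point set, cell size, and averaging rule are illustrative assumptions (the paper interpolates rather than simply averaging):

```python
from collections import defaultdict

def points_to_dem(points, cell):
    """Map/Reduce-style DEM gridding sketch: 'map' each (x, y, z) point to
    its grid cell key, then 'reduce' each cell by averaging elevations."""
    cells = defaultdict(list)
    for x, y, z in points:  # map phase: key each point by its cell index
        cells[(int(x // cell), int(y // cell))].append(z)
    # reduce phase: one mean elevation per occupied cell
    return {k: sum(v) / len(v) for k, v in cells.items()}

# Hypothetical toy point cloud: (x, y, elevation)
pts = [(0.2, 0.3, 10.0), (0.8, 0.1, 14.0), (1.5, 0.5, 20.0)]
dem = points_to_dem(pts, cell=1.0)
```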
Liu, Kun-Shia; Cheng, Ying-Yao; Chen, Yi-Ling; Wu, Yuh-Yih
2009-01-01
This study used nationwide data from the Taiwan Education Panel Survey (TEPS) to examine the longitudinal effects of educational expectations and achievement attributions on the academic achievements of adolescents. The sample included 2,000 Taiwanese secondary school students, each of whom completed three waves of questionnaires and cognitive tests: the first in grade 7 (in 2001), the second in grade 9 (in 2003), and the third in grade 11 (in 2005). Through multilevel longitudinal analysis, the results showed: (1) educational expectations accounted for a moderate amount of the variance in academic achievements; (2) students with high educational expectations and effort attribution exhibited higher growth rates in their academic achievements; and (3) students with lower educational expectations and those attributing success to others showed significantly lower academic achievement and significantly lower growth rates in such achievement. The results demonstrated that adolescents' educational expectations and achievement attributions play crucial roles in the long-term course of academic accomplishments. Implications for educational practice and further studies are also discussed.
ERIC Educational Resources Information Center
Virginia State Council of Higher Education, Richmond.
For the past 7 years, the State Council of Higher Education has published a report of selected characteristics and degree programs for Virginia's state-supported colleges and universities. By combining data from independent institutions with information collected from the state-supported colleges, a more comprehensive picture of higher education…
Minorities in Higher Education.
ERIC Educational Resources Information Center
Justiz, Manuel J., Ed.; And Others
This book presents 19 papers on efforts to increase the participation of members of minority groups in higher education. The papers are: (1) "Demographic Trends and the Challenges to American Higher Education" (Manuel Justiz); (2) "Three Realities: Minority Life in the United States--The Struggle for Economic Equity (adapted by Don M. Blandin);…
Hypermedia and Higher Education.
ERIC Educational Resources Information Center
Lemke, Jay L.
1993-01-01
Discusses changes in higher education that are resulting from the use of hypermedia. Topics addressed include the structure of traditional texts; a distributed model for academic communication; independent learning as a model for higher education; skills for hypermedia literacy; database searching; information retrieval; authoring skills; design…
ERIC Educational Resources Information Center
Hayes, Dianne
2012-01-01
Higher education institutions are in the battle of a lifetime as they are coping with political and economic uncertainties, threats to federal aid, declining state support, higher tuition rates and increased competition from for-profit institutions. Amid all these challenges, these institutions are pressed to keep up with technological demands,…
Higher Education Exchange, 2009
ERIC Educational Resources Information Center
Brown, David W., Ed.; Witte, Deborah, Ed.
2009-01-01
This volume begins with an essay by Noelle McAfee, a contributor who is familiar to readers of Higher Education Exchange (HEX). She reiterates Kettering's president David Mathews' argument regarding the disconnect between higher education's sense of engagement and the public's sense of engagement, and suggests a way around the epistemological…
ERIC Educational Resources Information Center
Bismarck State Coll., ND.
This document outlines the curriculum plan for the one-semester vocational-technical training component of PHOENIX: A Model Program for Higher-Wage Potential Careers offered by Bismarck State College (North Dakota) which prepares and/or retrains individuals for higher-wage technical careers. The comprehensive model for the program is organized…
Higher Education Exchange 2006
ERIC Educational Resources Information Center
Brown, David W., Ed.; Witte, Deborah, Ed.
2006-01-01
Contributors to this issue of the Higher Education Exchange debate the issues around knowledge production, discuss the acquisition of deliberative skills for democracy, and examine how higher education prepares, or does not prepare, students for citizenship roles. Articles include: (1) "Foreword" (Deborah Witte); (2) "Knowledge, Judgment and…
Reimagining Christian Higher Education
ERIC Educational Resources Information Center
Hulme, E. Eileen; Groom, David E., Jr.; Heltzel, Joseph M.
2016-01-01
The challenges facing higher education continue to mount. The shifting of the U.S. ethnic and racial demographics, the proliferation of advanced digital technologies and data, and the move from traditional degrees to continuous learning platforms have created an unstable environment to which Christian higher education must adapt in order to remain…
ERIC Educational Resources Information Center
Bank, Barbara J., Ed.
2011-01-01
This comprehensive, encyclopedic review explores gender and its impact on American higher education across historical and cultural contexts. Challenging recent claims that gender inequities in U.S. higher education no longer exist, the contributors--leading experts in the field--reveal the many ways in which gender is embedded in the educational…
Mathematics anxiety and mathematics achievement
NASA Astrophysics Data System (ADS)
Sherman, Brian F.; Wither (Post.), David P.
2003-09-01
This paper is a distillation of the major result from the 1998 Ph.D. thesis of the late David Wither. It details a longitudinal study, over five years, of the relationship between mathematics anxiety and mathematics achievement. It starts from the already well-documented negative correlation between the two, and seeks to establish one of three hypotheses: that mathematics anxiety causes an impairment of mathematics achievement; that lack of mathematics achievement causes mathematics anxiety; or that there is a third underlying cause of both.
NASA Astrophysics Data System (ADS)
Gandomi, A. H.; Yang, X.-S.; Talatahari, S.; Alavi, A. H.
2013-01-01
A recently developed metaheuristic optimization algorithm, firefly algorithm (FA), mimics the social behavior of fireflies based on the flashing and attraction characteristics of fireflies. In the present study, we will introduce chaos into FA so as to increase its global search mobility for robust global optimization. Detailed studies are carried out on benchmark problems with different chaotic maps. Here, 12 different chaotic maps are utilized to tune the attractive movement of the fireflies in the algorithm. The results show that some chaotic FAs can clearly outperform the standard FA.
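The chaos-tuned update the abstract describes can be sketched in one dimension. This is a minimal illustration, assuming a logistic map as the chaotic driver of the attractiveness coefficient (the study itself compares 12 different chaotic maps); parameter names are hypothetical, not taken from the paper.

```python
import math
import random

def logistic_map(x, mu=4.0):
    """One step of the chaotic logistic map on (0, 1)."""
    return mu * x * (1.0 - x)

def chaotic_firefly_step(pos, brightness, gamma=1.0, alpha=0.2, chaos=0.7):
    """One generation of a chaos-tuned firefly move (illustrative only).

    pos        : list of 1-D firefly positions
    brightness : list of objective values (higher = brighter)
    chaos      : current chaotic-map state, used in place of a fixed
                 attractiveness coefficient beta0 (an assumption; the
                 paper evaluates several maps for this role)
    """
    n = len(pos)
    new_pos = list(pos)
    for i in range(n):
        for j in range(n):
            if brightness[j] > brightness[i]:
                r2 = (pos[i] - pos[j]) ** 2
                beta = chaos * math.exp(-gamma * r2)   # chaotic attractiveness
                new_pos[i] += beta * (pos[j] - pos[i]) \
                              + alpha * (random.random() - 0.5)
        chaos = logistic_map(chaos)                    # evolve the chaotic state
    return new_pos, chaos
```

Replacing the fixed attractiveness with an evolving chaotic state is what gives the "global search mobility" the abstract refers to.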
NASA Technical Reports Server (NTRS)
Chan, Hak-Wai; Yan, Tsun-Yee
1989-01-01
Algorithm developed for optimal routing of packets of data along links of multilink, multinode digital communication network. Algorithm iterative and converges to cost-optimal assignment independent of initial assignment. Each node connected to other nodes through links, each containing number of two-way channels. Algorithm assigns channels according to message traffic leaving and arriving at each node. Modified to take account of different priorities among packets belonging to different users by using different delay constraints or imposing additional penalties via cost function.
Parallelization of the Pipelined Thomas Algorithm
NASA Technical Reports Server (NTRS)
Povitsky, A.
1998-01-01
In this study the following questions are addressed. Is it possible to improve the parallelization efficiency of the Thomas algorithm? How should the Thomas algorithm be formulated in order to get solved lines that are used as data for other computational tasks while processors are idle? To answer these questions, two-step pipelined algorithms (PAs) are introduced formally. It is shown that the idle processor time is invariant with respect to the order of backward and forward steps in PAs starting from one outermost processor. The advantage of PAs starting from two outermost processors is small. Versions of the pipelined Thomas algorithms considered here fall into the category of PAs. These results show that the parallelization efficiency of the Thomas algorithm cannot be improved directly. However, the processor idle time can be used if some data has been computed by the time processors become idle. To achieve this goal the Immediate Backward pipelined Thomas Algorithm (IB-PTA) is developed in this article. The backward step is computed immediately after the forward step has been completed for the first portion of lines. This enables the completion of the Thomas algorithm for some of these lines before processors become idle. An algorithm for generating a static processor schedule recursively is developed. This schedule is used to switch between forward and backward computations and to control communications between processors. The advantage of the IB-PTA over the basic PTA is the presence of solved lines, which are available for other computations, by the time processors become idle.
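For reference, the serial Thomas algorithm whose forward and backward steps the pipelined variants reorder can be sketched as a textbook tridiagonal solve (this is the baseline algorithm, not the pipelined scheduling itself):

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with the (serial) Thomas algorithm.

    a: sub-diagonal   (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side.
    Forward elimination followed by back substitution; it is this
    forward/backward structure that the pipelined variants (PA, IB-PTA)
    reorder across processors.
    """
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward step
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # backward step
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```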
Using Design To Achieve Sustainability
Sustainability is defined as meeting the needs of this generation without compromising the ability of future generations to meet their needs. This is a conditional statement that places the responsibility for achieving sustainability squarely in hands of designers and planners....
Childhood Obesity and Cognitive Achievement.
Black, Nicole; Johnston, David W; Peeters, Anna
2015-09-01
Obese children tend to perform worse academically than normal-weight children. If poor cognitive achievement is truly a consequence of childhood obesity, this relationship has significant policy implications. Therefore, an important question is to what extent can this correlation be explained by other factors that jointly determine obesity and cognitive achievement in childhood? To answer this question, we exploit a rich longitudinal dataset of Australian children, which is linked to national assessments in math and literacy. Using a range of estimators, we find that obesity and body mass index are negatively related to cognitive achievement for boys but not girls. This effect cannot be explained by sociodemographic factors, past cognitive achievement or unobserved time-invariant characteristics and is robust to different measures of adiposity. Given the enormous importance of early human capital development for future well-being and prosperity, this negative effect for boys is concerning and warrants further investigation. PMID:26123250
Mastery Achievement of Intellectual Skills.
ERIC Educational Resources Information Center
Trembath, Richard J.; White, Richard T.
1979-01-01
Mastery learning techniques were improved through mathematics instruction based on a validated learning hierarchy, presenting tasks in a sequence consistent with the requirements of the hierarchy, and requiring learners to demonstrate achievement before being allowed to proceed. (Author/GDC)
Achieving Standards through Environmental Education.
ERIC Educational Resources Information Center
Kaspar, Mike
1999-01-01
Most states do not have the time or resources to develop environmental education standards from scratch. Highlights the role that environmental education and its interdisciplinary nature can play in helping students achieve. (DDR)
Color sorting algorithm based on K-means clustering algorithm
NASA Astrophysics Data System (ADS)
Zhang, BaoFeng; Huang, Qian
2009-11-01
In the process of raisin production, a variety of color impurities arise that need to be removed effectively. A new, efficient raisin color-sorting algorithm is presented here. First, threshold-based image processing was applied for pre-processing, and the gray-scale distribution characteristic of the raisin image was found. To obtain the chromatic aberration image and reduce disturbance, the background image data were subtracted from the target image data. Second, a Haar wavelet filter was used to obtain a smoothed image of the raisins. According to the different colors, and to mildew, spots and other external features, characteristics of the images were calculated so as to fully reflect the quality differences between raisins of different types. After this processing, the images were analyzed by the K-means clustering method, which achieves adaptive extraction of the statistical features; on this basis, the image data were divided into categories, making the categories of abnormal colors distinct. Using this algorithm, raisins of abnormal color and those with mottles were eliminated. The sorting rate was up to 98.6%, and the ratio of normal raisins to sorted grains was less than one eighth.
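The clustering stage can be illustrated with a deliberately simplified one-dimensional k-means, standing in for the multi-feature color clustering described above; feature extraction and wavelet smoothing are omitted, and the single gray value per sample is an assumption for the sketch.

```python
import random

def kmeans_1d(values, k=2, iters=20, seed=0):
    """Tiny 1-D k-means over scalar gray values (illustrative only; the
    sorter clusters richer color/texture features)."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:                               # assignment step
            i = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[i].append(v)
        centers = [sum(c) / len(c) if c else centers[i]  # update step
                   for i, c in enumerate(clusters)]
    return centers, clusters
```

Once the samples are grouped, the cluster whose center falls outside the normal-raisin range would be flagged as the abnormal-color category.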
NASA Astrophysics Data System (ADS)
Li, Jinsha; Li, Junmin
2016-07-01
In this paper, an adaptive fuzzy iterative learning control scheme is proposed for coordination problems of Mth order (M ≥ 2) distributed multi-agent systems. Every follower agent has higher-order integrator dynamics with unknown nonlinearities and input disturbance. The dynamics of the leader are a higher-order nonlinear system, available only to a portion of the follower agents. With distributed initial state learning, the unified distributed protocols, combining time-domain and iteration-domain adaptive laws, guarantee that the follower agents track the leader uniformly on [0, T]. The proposed algorithm is then extended to achieve formation control. A numerical example and a multiple robotic system are provided to demonstrate the performance of the proposed approach.
Neural Network Algorithm for Particle Loading
J. L. V. Lewandowski
2003-04-25
An artificial neural network algorithm for continuous minimization is developed and applied to the case of numerical particle loading. It is shown that higher-order moments of the probability distribution function can be efficiently renormalized using this technique. A general neural network for the renormalization of an arbitrary number of moments is given.
Ethiopian New Public Universities: Achievements, Challenges and Illustrative Case Studies
ERIC Educational Resources Information Center
van Deuren, Rita; Kahsu, Tsegazeab; Mohammed, Seid; Woldie, Wondimu
2016-01-01
Purpose: This paper aims to analyze and illustrate achievements and challenges of Ethiopian higher education, both at the system level and at the level of new public universities. Design/methodology/approach: Achievements and challenges at the system level are based on literature review and secondary data. Illustrative case studies are based on…
Effective Practices: The Role of Accreditation in Student Achievement
ERIC Educational Resources Information Center
Council for Higher Education Accreditation, 2010
2010-01-01
The Council for Higher Education Accreditation (CHEA) has focused on the role of accreditation in student achievement since the publication of its 2001 "Accreditation and Student Learning Outcomes: A Proposed Point of Departure." Student achievement has remained central to CHEA research and policy analysis, as well as interviews and surveys with…
Rural Student Achievement: Elements for Consideration. ERIC Digest.
ERIC Educational Resources Information Center
Edington, Everett D.; Koehler, Lyle
Current educational research efforts are examining rural/urban differences in achievement, appropriateness of rural/urban achievement measures, effects of parents and community on the attainment of rural students, and how well rural students succeed in higher education. To accurately assess the small, rural school's impact on students, rural-urban…
Flipping College Algebra: Effects on Student Engagement and Achievement
ERIC Educational Resources Information Center
Ichinose, Cherie; Clinkenbeard, Jennifer
2016-01-01
This study compared student engagement and achievement levels between students enrolled in a traditional college algebra lecture course and students enrolled in a "flipped" course. Results showed that students in the flipped class had consistently higher levels of achievement throughout the course than did students in the traditional…
Science Achievement, Class Size, and Demographics: The Debate Continues.
ERIC Educational Resources Information Center
Miller-Whitehead, Marie
2001-01-01
Examined the relationship between school system financial and demographic data and student achievement in the science section of the 1998 Tennessee statewide Terra Nova tests. Results indicate that while many schools had science scale score achievement higher than expected based on system demographics, others should examine a variety of…
Scaling up multiphoton neural scanning: the SSA algorithm.
Schuck, Renaud; Annecchino, Luca A; Schultz, Simon R
2014-01-01
In order to reverse-engineer the information processing capabilities of the cortical circuit, we need to densely sample the neural circuit; it may be necessary to sample the activity of thousands of neurons simultaneously. Frame scanning techniques do not scale well in this regard, due to the time "wasted" scanning extracellular space. For scanners in which inertia can be neglected, path length minimization strategies enable large populations to be imaged at relatively high sampling rates. However, in a standard multiphoton microscope, the scanners responsible for beam deflection are inertial, indicating that an optimal solution should take rotor and mirror momentum into account. We therefore characterized the galvanometric scanners of a commercial multiphoton microscope in order to develop and validate a MATLAB model of microscope scanning dynamics. We tested the model by simulating scan paths across pseudo-randomly positioned neuronal populations of differing neuronal density and field of view. This model motivated the development of a novel scanning algorithm, Adaptive Spiral Scanning (SSA), in which the radius of a circular trajectory is constantly updated such that it follows a spiral trajectory scanning all the cells. Due to the kinematic efficiency of near-circular trajectories, this algorithm achieves higher sampling rates than shortest-path approaches, while retaining a relatively efficient coverage fraction in comparison to raster- or resonance-based frame-scanning approaches. PMID:25570582
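A rough sketch of the adaptive-spiral idea, reconstructed from the description above: the scan radius is stepped between the radial positions of target cells so that the beam sweeps each annulus once. The radius-stepping rule and parameter names are assumptions for illustration, not the published controller.

```python
import math

def spiral_scan_path(cell_radii, points_per_rev=64):
    """Generate (x, y) samples of an adaptive spiral: within each
    revolution the radius is interpolated from the previous target
    radius to the next, visiting the (sorted) radial positions of the
    cells. Illustrative reconstruction only."""
    path = []
    r_prev = 0.0
    for r_target in sorted(cell_radii):
        for k in range(points_per_rev):
            theta = 2.0 * math.pi * k / points_per_rev
            # interpolate radius within the revolution -> smooth spiral
            r = r_prev + (r_target - r_prev) * k / points_per_rev
            path.append((r * math.cos(theta), r * math.sin(theta)))
        r_prev = r_target
    return path
```

Because consecutive samples lie on a near-circular trajectory, the angular velocity of the inertial scanners stays nearly constant, which is the kinematic-efficiency argument the abstract makes.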
An enhanced algorithm for multiple sequence alignment of protein sequences using genetic algorithm
Kumar, Manish
2015-01-01
One of the most fundamental operations in biological sequence analysis is multiple sequence alignment (MSA). The basic aim of the multiple sequence alignment problem is to determine the most biologically plausible alignments of protein or DNA sequences. In this paper, an alignment method using a genetic algorithm for multiple sequence alignment is proposed. Two genetic operators, crossover and mutation, were defined and implemented with the proposed method in order to track the population evolution and the quality of the aligned sequences. The proposed method is assessed on protein benchmark datasets, e.g., BALIBASE, by comparing the obtained results to those obtained with other alignment algorithms, e.g., SAGA, RBT-GA, PRRP, HMMT, SB-PIMA, CLUSTALX, CLUSTAL W, DIALIGN and PILEUP8. Experiments on a wide range of data have shown that the proposed algorithm is much better (in terms of score) than previously proposed algorithms in its ability to achieve high alignment quality. PMID:27065770
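A toy genetic-algorithm loop over gap placements conveys the general approach. This sketch uses only a gap-shift mutation operator and a naive fully-matched-column score, whereas the paper defines both crossover and mutation operators and evaluates BALIBASE-style scores; all names here are illustrative.

```python
import random

def column_score(aln):
    """Fitness: number of fully matched, gap-free columns (a crude
    stand-in for the sum-of-pairs scoring used on BALIBASE)."""
    return sum(1 for col in zip(*aln)
               if len(set(col)) == 1 and col[0] != "-")

def mutate(aln, rng):
    """Gap-shift mutation: swap one gap with a random position in a
    randomly chosen row."""
    aln = [list(s) for s in aln]
    row = rng.randrange(len(aln))
    gaps = [i for i, ch in enumerate(aln[row]) if ch == "-"]
    if gaps:
        i = rng.choice(gaps)
        j = rng.randrange(len(aln[row]))
        aln[row][i], aln[row][j] = aln[row][j], aln[row][i]
    return ["".join(r) for r in aln]

def ga_align(seqs, generations=100, pop_size=20, seed=1):
    """Toy GA over gap placements for gap-padded sequences (crossover
    omitted for brevity)."""
    rng = random.Random(seed)
    width = max(len(s) for s in seqs) + 2
    pop = [[s + "-" * (width - len(s)) for s in seqs]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=column_score, reverse=True)   # elitist selection
        pop = pop[: pop_size // 2]
        pop += [mutate(rng.choice(pop), rng)
                for _ in range(pop_size - len(pop))]
    return max(pop, key=column_score)
```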
Sustainability and Higher Education
ERIC Educational Resources Information Center
Hales, David
2008-01-01
People face four fundamental dilemmas, which are essentially moral choices: (1) alleviating poverty; (2) removing the gap between rich and poor; (3) controlling the use of violence for political ends; and (4) changing the patterns of production and consumption and achieving the transition to sustainability. The world in which future generations…
Achieving safe autonomous landings on Mars using vision-based approaches
NASA Technical Reports Server (NTRS)
Pien, Homer
1992-01-01
Autonomous landing capabilities will be critical to the success of planetary exploration missions, and in particular to the exploration of Mars. Past studies have indicated that the probability of failure associated with open-loop landings is unacceptably high. Two approaches to achieving autonomous landings with higher probabilities of success are currently under analysis. If a landing site has been certified as hazard free, then navigational aids can be used to facilitate a precision landing. When only limited surface knowledge is available and landing areas cannot be certified as hazard free, then a hazard detection and avoidance approach can be used, in which the vehicle selects hazard free landing sites in real-time during its descent. Issues pertinent to both approaches, including sensors and algorithms, are presented. Preliminary results indicate that one promising approach to achieving high accuracy precision landing is to correlate optical images of the terrain acquired during the terminal descent phase with a reference image. For hazard detection scenarios, a sensor suite comprised of a passive intensity sensor and a laser ranging sensor appears promising as a means of achieving robust landings.
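The image-correlation approach mentioned for precision landing can be sketched with a plain normalized cross-correlation template match against a reference image. This is a generic illustration (the abstract does not specify the actual matcher), with hypothetical function names and flat row-major gray images.

```python
def ncc(patch, template):
    """Normalized cross-correlation between two equal-size gray patches
    (flat lists); 1.0 means a perfect match up to brightness/contrast."""
    n = len(patch)
    mp = sum(patch) / n
    mt = sum(template) / n
    num = sum((p - mp) * (t - mt) for p, t in zip(patch, template))
    dp = sum((p - mp) ** 2 for p in patch) ** 0.5
    dt = sum((t - mt) ** 2 for t in template) ** 0.5
    return num / (dp * dt) if dp and dt else 0.0

def best_offset(image, width, template, twidth):
    """Slide the reference template over a flat row-major image and
    return the (row, col) offset with the highest correlation score."""
    iheight = len(image) // width
    theight = len(template) // twidth
    best, best_rc = -2.0, (0, 0)
    for r in range(iheight - theight + 1):
        for c in range(width - twidth + 1):
            patch = [image[(r + i) * width + (c + j)]
                     for i in range(theight) for j in range(twidth)]
            s = ncc(patch, template)
            if s > best:
                best, best_rc = s, (r, c)
    return best_rc
```

The recovered offset between the descent image and the reference map is what would feed the precision-landing navigation update.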
Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi
2014-01-01
Bat algorithm (BA) is a novel stochastic global optimization algorithm. The cloud model is an effective tool for transforming between qualitative concepts and their quantitative representations. Based on the bat echolocation mechanism and the excellent characteristics of the cloud model for representing uncertain knowledge, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling the echolocation model based on the living and preying characteristics of bats, utilizing the transformation theory of the cloud model to depict the qualitative concept "bats approach their prey." Furthermore, the Lévy flight mode and a population information communication mechanism are introduced to balance exploration and exploitation. Simulation results show that the cloud model bat algorithm performs well on function optimization. PMID:24967425
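The standard bat-algorithm velocity and position update that CBA builds on can be sketched as follows; the cloud-model resampling of "bats approach their prey" and the Lévy flight component are not reproduced here, and the parameter names are illustrative.

```python
import random

def bat_step(pos, vel, freq_lo, freq_hi, best, rng):
    """One update of the standard bat algorithm in 1-D: each bat draws a
    random echolocation frequency and updates its velocity relative to
    the current global best position."""
    new_pos, new_vel = [], []
    for x, v in zip(pos, vel):
        f = freq_lo + (freq_hi - freq_lo) * rng.random()  # random frequency
        v = v + (x - best) * f                            # velocity update
        new_vel.append(v)
        new_pos.append(x + v)                             # position update
    return new_pos, new_vel
```

In CBA, the deterministic pull toward the best position is replaced by cloud-model sampling around it, which is what injects the uncertainty the abstract describes.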
Staged optimization algorithms based MAC dynamic bandwidth allocation for OFDMA-PON
NASA Astrophysics Data System (ADS)
Liu, Yafan; Qian, Chen; Cao, Bingyao; Dun, Han; Shi, Yan; Zou, Junni; Lin, Rujian; Wang, Min
2016-06-01
Orthogonal frequency division multiple access passive optical network (OFDMA-PON) has been considered a promising solution for next-generation PONs due to its high spectral efficiency and flexible bandwidth allocation scheme. In order to take full advantage of these merits of OFDMA-PON, a high-efficiency medium access control (MAC) dynamic bandwidth allocation (DBA) scheme is needed. In this paper, we propose two DBA algorithms that act on two different stages of the resource allocation process. To achieve higher bandwidth utilization and ensure fairness among ONUs, we propose a DBA algorithm based on frame structure for the physical layer mapping stage. Targeting the global quality of service (QoS) of OFDMA-PON, we propose a full-range DBA algorithm with service level agreement (SLA) and class of service (CoS) for the bandwidth allocation arbitration stage. The performance of the proposed MAC DBA scheme containing these two algorithms is evaluated using numerical simulations. Simulations of a 15 Gbps network with 1024 sub-carriers and 32 ONUs demonstrate a maximum network throughput of 14.87 Gbps and a maximum packet delay of 1.45 ms for the highest-priority CoS under high load.
A Novel LTE Scheduling Algorithm for Green Technology in Smart Grid
Hindia, Mohammad Nour; Reza, Ahmed Wasif; Noordin, Kamarul Ariffin; Chayon, Muhammad Hasibur Rashid
2015-01-01
Smart grid (SG) applications are used nowadays to meet the demand of increasing power consumption. The SG application is considered a perfect solution for combining renewable energy resources and the electrical grid by means of creating a bidirectional communication channel between the two systems. In this paper, three SG applications applicable to renewable energy systems, namely, distribution automation (DA), distributed energy system-storage (DER) and electrical vehicle (EV), are investigated in order to study their suitability in a Long Term Evolution (LTE) network. To compensate for the weaknesses of existing scheduling algorithms, a novel bandwidth estimation and allocation technique and a new scheduling algorithm are proposed. The technique allocates available network resources based on application priority, whereas the algorithm makes scheduling decisions based on dynamic weighting factors of multi-criteria to satisfy the quality-of-service demands (delay, past average throughput and instantaneous transmission rate). Finally, the simulation results demonstrate that the proposed mechanism achieves higher throughput, lower delay and lower packet loss rate for DA and DER, as well as providing a degree of service for EV. In terms of fairness, the proposed algorithm shows 3%, 7% and 9% better performance compared to exponential rule (EXP-Rule), modified-largest weighted delay first (M-LWDF) and exponential/PF (EXP/PF), respectively. PMID:25830703
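The multi-criteria scheduling decision described above might be sketched as a weighted per-user metric over delay, past average throughput and instantaneous rate. The static weights and function names here are placeholders, whereas the proposed algorithm derives the weighting factors dynamically per class of service.

```python
def schedule_metric(delay, avg_thr, inst_rate, w_delay, w_fair, w_rate):
    """Multi-criteria scheduling weight (illustrative only).
    Higher metric -> user scheduled first."""
    return (w_delay * delay                        # delayed packets favored
            + w_fair * (1.0 / (avg_thr + 1e-9))    # proportional-fairness term
            + w_rate * inst_rate)                  # channel-aware term

def pick_user(users, weights):
    """users: list of (delay, avg_thr, inst_rate) tuples; returns the
    index of the user with the highest scheduling metric."""
    return max(range(len(users)),
               key=lambda i: schedule_metric(*users[i], *weights))
```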
An efficient tensor transpose algorithm for multicore CPU, Intel Xeon Phi, and NVidia Tesla GPU
Lyakh, Dmitry I.
2015-01-05
An efficient parallel tensor transpose algorithm is suggested for shared-memory computing units, namely, multicore CPU, Intel Xeon Phi, and NVidia GPU. The algorithm operates on dense tensors (multidimensional arrays) and is based on the optimization of cache utilization on x86 CPU and the use of shared memory on NVidia GPU. From the applied side, the ultimate goal is to minimize the overhead encountered in the transformation of tensor contractions into matrix multiplications in computer implementations of advanced methods of quantum many-body theory (e.g., in electronic structure theory and nuclear physics). A particular accent is made on higher-dimensional tensors that typically appear in the so-called multireference correlated methods of electronic structure theory. Depending on tensor dimensionality, the presented optimized algorithms can achieve an order of magnitude speedup on x86 CPUs and 2-3 times speedup on NVidia Tesla K20X GPU with respect to the naïve scattering algorithm (no memory access optimization). Furthermore, the tensor transpose routines developed in this work have been incorporated into a general-purpose tensor algebra library (TAL-SH).
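The cache-utilization idea can be illustrated with a cache-blocked 2-D transpose on a flat array; the library itself generalizes this to dense tensors of arbitrary rank, so this is only a sketch of the underlying optimization.

```python
def blocked_transpose(a, rows, cols, tile=32):
    """Cache-blocked transpose of a row-major rows x cols matrix stored
    in a flat list. Tiling keeps both the read and write streams inside
    one cache-sized block at a time, which is the x86 optimization the
    TAL-SH transpose builds on (generalized there to arbitrary-rank
    tensors)."""
    out = [0] * (rows * cols)
    for ii in range(0, rows, tile):            # iterate over tiles
        for jj in range(0, cols, tile):
            for i in range(ii, min(ii + tile, rows)):
                for j in range(jj, min(jj + tile, cols)):
                    out[j * rows + i] = a[i * cols + j]
    return out
```

The naive "scattering" version is the same four-line loop without the two outer tile loops; there, one of the two access streams strides through memory and misses cache on nearly every element.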
An efficient tensor transpose algorithm for multicore CPU, Intel Xeon Phi, and NVidia Tesla GPU
NASA Astrophysics Data System (ADS)
Lyakh, Dmitry I.
2015-04-01
An efficient parallel tensor transpose algorithm is suggested for shared-memory computing units, namely, multicore CPU, Intel Xeon Phi, and NVidia GPU. The algorithm operates on dense tensors (multidimensional arrays) and is based on the optimization of cache utilization on x86 CPU and the use of shared memory on NVidia GPU. From the applied side, the ultimate goal is to minimize the overhead encountered in the transformation of tensor contractions into matrix multiplications in computer implementations of advanced methods of quantum many-body theory (e.g., in electronic structure theory and nuclear physics). Particular emphasis is placed on higher-dimensional tensors that typically appear in the so-called multireference correlated methods of electronic structure theory. Depending on tensor dimensionality, the presented optimized algorithms can achieve an order of magnitude speedup on x86 CPUs and 2-3 times speedup on NVidia Tesla K20X GPU with respect to the naïve scattering algorithm (no memory access optimization). The tensor transpose routines developed in this work have been incorporated into a general-purpose tensor algebra library (TAL-SH).
2013-07-29
The OpenEIS Algorithm package seeks to provide a low-risk path for building owners, service providers and managers to explore analytical methods for improving building control and operational efficiency. Users of this software can analyze building data, and learn how commercial implementations would provide long-term value. The code also serves as a reference implementation for developers who wish to adapt the algorithms for use in commercial tools or service offerings.
The Superior Lambert Algorithm
NASA Astrophysics Data System (ADS)
der, G.
2011-09-01
Lambert algorithms are used extensively for initial orbit determination, mission planning, space debris correlation, and missile targeting, just to name a few applications. Due to the significance of the Lambert problem in Astrodynamics, Gauss, Battin, Godal, Lancaster, Gooding, Sun and many others (References 1 to 15) have provided numerous formulations leading to various analytic solutions and iterative methods. Most Lambert algorithms and their computer programs can only work within one revolution, break down or converge slowly when the transfer angle is near zero or 180 degrees, and their multi-revolution limitations are either ignored or barely addressed. Despite claims of robustness, many Lambert algorithms fail without notice, and the users seldom have a clue why. The DerAstrodynamics lambert2 algorithm, which is based on the analytic solution formulated by Sun, works for any number of revolutions and converges rapidly at any transfer angle. It provides significant capability enhancements over every other Lambert algorithm in use today. These include improved speed, accuracy, robustness, and multirevolution capabilities as well as implementation simplicity. Additionally, the lambert2 algorithm provides a powerful tool for solving the angles-only problem without artificial singularities (pointed out by Gooding in Reference 16), which involves 3 lines of sight captured by optical sensors, or systems such as the Air Force Space Surveillance System (AFSSS). The analytic solution is derived from the extended Godal’s time equation by Sun, while the iterative method of solution is that of Laguerre, modified for robustness. The Keplerian solution of a Lambert algorithm can be extended to include the non-Keplerian terms of the Vinti algorithm via a simple targeting technique (References 17 to 19). Accurate analytic non-Keplerian trajectories can be predicted for satellites and ballistic missiles, while performing at least 100 times faster in speed than most
Fast Intersection Algorithms for Sorted Sequences
NASA Astrophysics Data System (ADS)
Baeza-Yates, Ricardo; Salinger, Alejandro
This paper presents and analyzes a simple intersection algorithm for sorted sequences that is fast on average. It is related to the multiple searching problem and to merging. We present the worst and average case analysis, showing that in the former, the complexity nicely adapts to the smallest list size. In the latter case, it performs fewer comparisons than the total number of elements in both inputs, n and m, when n = αm (α > 1), achieving O(m log(n/m)) complexity. The algorithm is motivated by its application to fast query processing in Web search engines, where large intersections, or differences, must be performed fast. In this case we experimentally show that the algorithm is faster than previous solutions.
The fuzzy C spherical shells algorithm - A new approach
NASA Technical Reports Server (NTRS)
Krishnapuram, Raghu; Nasraoui, Olfa; Frigui, Hichem
1992-01-01
The fuzzy c spherical shells (FCSS) algorithm is specially designed to search for clusters that can be described by circular arcs or, more generally, by shells of hyperspheres. In this paper, a new approach to the FCSS algorithm is presented. This algorithm is computationally and implementationally simpler than other clustering algorithms that have been suggested for this purpose. An unsupervised algorithm which automatically finds the optimum number of clusters is also proposed. This algorithm can be used when the number of clusters is not known. It uses a cluster validity measure to identify good clusters, merges all compatible clusters, and eliminates spurious clusters to achieve the final result. Experimental results on several data sets are presented.
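The quantity such shell-clustering algorithms work with is a point's distance to a hyperspherical shell rather than to a cluster center. As a toy illustration (the fixed center and radius below are hypothetical; FCSS itself estimates them by alternating fuzzy-membership and prototype updates):

```python
import math

# A point's distance to a shell with center c and radius r is | ||x - c|| - r |:
# zero exactly on the circle/sphere, growing inward and outward. This is the
# residual that shell-clustering objectives minimize.

def shell_distance(x, center, radius):
    """Distance from point x to the hyperspherical shell (center, radius)."""
    return abs(math.dist(x, center) - radius)
```

Membership of a point in each shell cluster is then derived from these distances, exactly as fuzzy c-means derives memberships from point-to-centroid distances.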
Higher-order time integration of Coulomb collisions in a plasma using Langevin equations
Dimits, A.M.; Cohen, B.I.; Caflisch, R.E.; Rosin, M.S.; Ricketson, L.F.
2013-06-01
The extension of Langevin-equation Monte-Carlo algorithms for Coulomb collisions from the conventional Euler–Maruyama time integration to the next higher order of accuracy, the Milstein scheme, has been developed, implemented, and tested. This extension proceeds via a formulation of the angular scattering directly as stochastic differential equations in the fixed-frame spherical-coordinate velocity variables. Results from the numerical implementation show the expected improvement [O(Δt) vs. O(Δt{sup 1/2})] in the strong convergence rate both for the speed |v| and angular components of the scattering. An important result is that this improved convergence is achieved for the angular component of the scattering if and only if the “area-integral” terms in the Milstein scheme are included. The resulting Milstein scheme is of value as a step towards algorithms with both improved accuracy and efficiency. These include both algorithms with improved convergence in the averages (weak convergence) and multi-time-level schemes. The latter have been shown to give a greatly reduced cost for a given overall error level when compared with conventional Monte-Carlo schemes, and their performance is improved considerably when the Milstein algorithm is used for the underlying time advance versus the Euler–Maruyama algorithm. A new method for sampling the area integrals is given which is a simplification of an earlier direct method and which retains high accuracy. This method, while being useful in its own right because of its relative simplicity, is also expected to considerably reduce the computational requirements for the direct conditional sampling of the area integrals that is needed for adaptive strong integration.
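The O(Δt) vs. O(Δt^{1/2}) strong-convergence gap between Milstein and Euler-Maruyama can be seen on any scalar SDE with an exact solution. The sketch below uses geometric Brownian motion as a stand-in (the abstract's application is Coulomb-collision SDEs, which are more involved; parameters here are illustrative):

```python
import math
import random

def simulate(seed=1, n_paths=500, n_steps=50, T=1.0, a=0.1, b=0.5, x0=1.0):
    """Compare strong (pathwise) errors of Euler-Maruyama and Milstein on
    dX = a X dt + b X dW, whose exact solution is
    X(T) = x0 * exp((a - b^2/2) T + b W(T)). Both schemes share the same
    Brownian increments, so the comparison is pathwise-fair."""
    rng = random.Random(seed)
    dt = T / n_steps
    err_em = err_mil = 0.0
    for _ in range(n_paths):
        x_em = x_mil = x0
        w = 0.0
        for _ in range(n_steps):
            dw = rng.gauss(0.0, math.sqrt(dt))
            w += dw
            x_em += a * x_em * dt + b * x_em * dw
            # Milstein adds the 0.5 * b * (db/dx) * (dW^2 - dt) correction;
            # for diffusion b*x, that is 0.5 * b^2 * x * (dW^2 - dt).
            x_mil += (a * x_mil * dt + b * x_mil * dw
                      + 0.5 * b * b * x_mil * (dw * dw - dt))
        exact = x0 * math.exp((a - 0.5 * b * b) * T + b * w)
        err_em += abs(x_em - exact)
        err_mil += abs(x_mil - exact)
    return err_em / n_paths, err_mil / n_paths
```

For multi-dimensional problems such as the angular scattering above, the Milstein correction involves the "area-integral" (Lévy area) terms, which is precisely the part the abstract identifies as essential.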
NASA Astrophysics Data System (ADS)
Baas, Nils A.
2016-08-01
In this paper, we discuss various philosophical aspects of the hyperstructure concept extending networks and higher categories. By this discussion, we hope to pave the way for applications and further developments of the mathematical theory of hyperstructures.
Forecasting Higher Education's Future.
ERIC Educational Resources Information Center
Boyken, Don; Buck, Tina S.; Kollie, Ellen; Przyborowski, Danielle; Rondinelli, Joseph A.; Hunter, Jeff; Hanna, Jeff
2003-01-01
Offers predictions on trends in higher education to accommodate changing needs, lower budgets, and increased enrollment. They involve campus construction, security, administration, technology, interior design, athletics, and transportation. (EV)
International Higher Education Bibliography.
ERIC Educational Resources Information Center
Lulat, Y. G-M.
1988-01-01
One in a series of bibliographies of articles in international higher education journals lists items on a variety of administrative, financial, faculty, student, curricular, and related issues. Articles on specific geographic regions are categorized separately. (MSE)
Hybrid protection algorithms based on game theory in multi-domain optical networks
NASA Astrophysics Data System (ADS)
Guo, Lei; Wu, Jingjing; Hou, Weigang; Liu, Yejun; Zhang, Lincong; Li, Hongming
2011-12-01
With increasing network size, the optical backbone is divided into multiple domains, each with its own network operator and management policy. At the same time, failures in an optical network may lead to huge data loss, since each wavelength carries a large amount of traffic. Survivability in multi-domain optical networks is therefore very important. However, existing survivable algorithms achieve only a unilateral optimization of the profit of either users or network operators; they cannot find a double-win optimal solution that considers economic factors for both. Thus, in this paper we develop a multi-domain network model involving multiple Quality of Service (QoS) parameters. After presenting a link evaluation approach based on fuzzy mathematics, we propose a game model to find the optimal solution maximizing the user's utility, the network operator's utility, and the joint utility of user and network operator. Since the problem of finding the double-win optimal solution is NP-complete, we propose two new hybrid protection algorithms, the Intra-domain Sub-path Protection (ISP) algorithm and the Inter-domain End-to-end Protection (IEP) algorithm. In ISP and IEP, "hybrid protection" means that an intelligent algorithm based on Bacterial Colony Optimization (BCO) and a heuristic algorithm are used to solve the survivability of intra-domain routing and inter-domain routing, respectively. Simulation results show that ISP and IEP have similar comprehensive utility. In addition, ISP has better resource utilization efficiency, lower blocking probability, and higher network operator's utility, while IEP has better user's utility.
Khan, Rao F. Villarreal-Barajas, Eduardo; Lau, Harold; Liu, Hong-Wei
2014-04-01
Stereotactic body radiotherapy (SBRT) is a curative regimen that uses hypofractionated radiation-absorbed dose to achieve a high degree of local control in early stage non-small cell lung cancer (NSCLC). In the presence of heterogeneities, the dose calculation for the lungs becomes challenging. We have evaluated the dosimetric effect of the recently introduced advanced dose-calculation algorithm, Acuros XB (AXB), for SBRT of NSCLC. A total of 97 patients with early-stage lung cancer who underwent SBRT at our cancer center during the last 4 years were included. Initial clinical plans were created in Aria Eclipse version 8.9 or prior, using 6 to 10 fields with 6-MV beams, and dose was calculated using the anisotropic analytic algorithm (AAA) as implemented in the Eclipse treatment planning system. The clinical plans were recalculated in Aria Eclipse 11.0.21 using both the AAA and AXB algorithms. Both sets of plans were normalized to the same prescription point at the center of mass of the target. A secondary monitor unit (MU) calculation was performed using the commercial program RadCalc for all of the fields. For planning target volumes ranging from 19 to 375 cm³, a comparison of MUs was performed for both algorithms on a field and plan basis. In total, the variation of MUs for 677 treatment fields was investigated in terms of equivalent depth and the equivalent square of the field. Overall, the MUs required by AXB to deliver the prescribed dose are on average 2% higher than those of AAA. Using a 2-tailed paired t-test, the MUs from the 2 algorithms were found to be significantly different (p < 0.001). The secondary independent MU calculator RadCalc underestimates the required MUs (on average by 4% to 5%) in the lung relative to either of the 2 dose algorithms.
On the Achievable Throughput Over TVWS Sensor Networks.
Caleffi, Marcello; Cacciapuoti, Angela Sara
2016-01-01
In this letter, we study the throughput achievable by an unlicensed sensor network operating over TV white space spectrum in the presence of coexistence interference. Through the letter, we first analytically derive the achievable throughput as a function of the channel ordering. Then, we show that the problem of deriving the maximum expected throughput through exhaustive search is computationally unfeasible. Finally, we derive a computationally efficient algorithm characterized by polynomial-time complexity to compute the channel set maximizing the expected throughput and, stemming from this, we derive a closed-form expression of the maximum expected throughput. Numerical simulations validate the theoretical analysis. PMID:27043565
Enhanced algorithms for stochastic programming
Krishna, A.S.
1993-09-01
In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques to solve stochastic optimization problems. We describe improvements to the current techniques in both these areas. We studied different ways of using importance sampling techniques in the context of stochastic programming by varying the choice of approximation functions used in this method. We concluded that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient: this reduces the problem from finding the mean of a computationally expensive function to finding that of a computationally inexpensive one. We then implemented various variance reduction techniques to estimate the mean of the piecewise-linear function. This method achieved similar variance reductions in orders of magnitude less time than applying variance-reduction techniques directly to the given problem. In solving a stochastic linear program, the expected value problem is usually solved before the stochastic problem, to speed up the algorithm by making use of the information obtained from the solution of the expected value problem. We have devised a new decomposition scheme to improve the convergence of this algorithm.
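The variance-reduction idea above, using a cheap approximation of an expensive function to absorb most of the randomness, is the control-variate pattern. A hedged toy sketch (the functions `f`, `g` and the sampling distribution below are hypothetical stand-ins, not the dissertation's recourse problem):

```python
import random

def f(x):                       # "expensive" recourse function (toy stand-in)
    return max(0.0, x - 1.0) ** 2

def g(x):                       # cheap piecewise-linear surrogate of f
    return max(0.0, 1.5 * (x - 1.0))

def estimate(n=4000, seed=7):
    """Naive Monte Carlo vs. control-variate estimate of E[f(X)], X ~ N(1,1).
    E[g] is obtained cheaply from a much larger sample; only the residual
    E[f - g], which has far smaller variance, is sampled expensively."""
    rng = random.Random(seed)
    xs = [rng.gauss(1.0, 1.0) for _ in range(n)]
    fs = [f(x) for x in xs]
    diffs = [f(x) - g(x) for x in xs]
    mean = lambda v: sum(v) / len(v)
    var = lambda v: sum((a - mean(v)) ** 2 for a in v) / len(v)
    rng2 = random.Random(123)
    eg = mean([g(rng2.gauss(1.0, 1.0)) for _ in range(100000)])  # cheap E[g]
    return mean(fs), eg + mean(diffs), var(fs), var(diffs)
```

Because the per-sample variance of `f - g` is much smaller than that of `f`, far fewer expensive evaluations are needed for the same accuracy, which is the speedup the abstract reports.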
EDITORIAL: Deeper, broader, higher, better?
NASA Astrophysics Data System (ADS)
Dobson, Ken
1998-07-01
Honorary Editor. The standard of educational achievement in England and Wales is frequently criticized, and it seems to be an axiom of government that schools and teachers need to be shaken up, kept on a tight rein, copiously inspected, shamed and blamed as required: in general, subjected to the good old approach of: 'Find out what Johnny is doing and tell him to stop.' About the only exception to this somewhat severe attitude is at A-level, where the standard is simply golden. Often, comparisons are made between the performance of, say, English children and that of their coevals in other countries, with different customs, systems, aims and languages. But there has been a recent comparison of standards at A-level with a non-A-level system of pre-university education, in an English-speaking country that both sends students to English universities and accepts theirs into its own, and is, indeed, represented in the UK government at well above the level expected from its ethnic weighting in the population. This semi-foreign country is Scotland. The conclusions of the study are interesting. Scotland has had its own educational system, with 'traditional breadth', and managed to escape much of the centralized authoritarianism that we have been through south of the border. It is interesting to note that, while for the past dozen years or so the trend in A-level Physics entries has been downwards, there has been an increase in the take-up of Scottish 'Highers'. Highers is a one-year course. Is its popularity due to its being easier than A-level? Scottish students keen enough to do more can move on to the Certificate of Sixth Year Studies, and will shortly be able to upgrade a Higher Level into an Advanced Higher Level. A comparability study [Comparability Study of Scottish Qualifications and GCE Advanced Levels: Report on Physics January 1998 (free from SQA)] was carried out by the Scottish Qualifications Authority (SQA) with the aim (amongst others) of helping
Formally Verified Practical Algorithms for Recovery from Loss of Separation
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Munoz, Caesar A.
2009-01-01
In this paper, we develop and formally verify practical algorithms for recovery from loss of separation. The formal verification is performed in the context of a criteria-based framework. This framework provides rigorous definitions of horizontal and vertical maneuver correctness that guarantee divergence and achieve horizontal and vertical separation. The algorithms are shown to be independently correct, that is, separation is achieved when only one aircraft maneuvers, and implicitly coordinated, that is, separation is also achieved when both aircraft maneuver. In this paper we improve the horizontal criteria over our previous work. An important benefit of the criteria approach is that different aircraft can execute different algorithms and implicit coordination will still be achieved, as long as they all meet the explicit criteria of the framework. Towards this end we have sought to make the criteria as general as possible. The framework presented in this paper has been formalized and mechanically verified in the Prototype Verification System (PVS).
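The notion of "divergence" in the abstract above can be made concrete with a standard kinematic check: two aircraft are horizontally diverging when their relative velocity has a positive component along their relative position. The sketch below is a simplified 2-D stand-in, not the PVS-verified criteria of the paper:

```python
# Simplified illustration of horizontal divergence: with relative position
# s = pos_ownship - pos_intruder and relative velocity v, the horizontal
# distance |s| is increasing exactly when s . v > 0. Criteria-based
# frameworks build correctness conditions for recovery maneuvers on top of
# predicates like this one.

def horizontally_diverging(rel_pos, rel_vel):
    """True if the 2-D horizontal separation is currently increasing."""
    dot = rel_pos[0] * rel_vel[0] + rel_pos[1] * rel_vel[1]
    return dot > 0.0
```

Implicit coordination then amounts to showing that any pair of maneuvers that each satisfy such a criterion jointly keeps the predicate true, regardless of which algorithm produced them.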
Feature extraction and classification algorithms for high dimensional data
NASA Technical Reports Server (NTRS)
Lee, Chulhee; Landgrebe, David
1993-01-01
Feature extraction and classification algorithms for high dimensional data are investigated. Developments with regard to sensors for Earth observation are moving in the direction of providing much higher dimensional multispectral imagery than is now possible. In analyzing such high dimensional data, processing time becomes an important factor. With large increases in dimensionality and the number of classes, processing time will increase significantly. To address this problem, a multistage classification scheme is proposed which reduces the processing time substantially by eliminating unlikely classes from further consideration at each stage. Several truncation criteria are developed and the relationship between thresholds and the error caused by the truncation is investigated. Next an approach to feature extraction for classification is proposed based directly on the decision boundaries. It is shown that all the features needed for classification can be extracted from decision boundaries. A characteristic of the proposed method arises by noting that only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is introduced. The proposed feature extraction algorithm has several desirable properties: it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem; and it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal means or equal covariances as some previous algorithms do. In addition, the decision boundary feature extraction algorithm can be used both for parametric and non-parametric classifiers. Finally, some problems encountered in analyzing high dimensional data are studied and possible solutions are proposed. First, the increased importance of the second order statistics in analyzing high dimensional data is recognized
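The multistage truncation scheme described above, eliminating unlikely classes cheaply before the expensive stage, can be sketched as follows. This is a hypothetical illustration (the cheap score, the `keep` rule, and the reuse of one scorer for both stages are simplifications, not the paper's scheme):

```python
# Two-stage classification sketch: stage 1 ranks classes with a cheap score
# (squared distance to class means on a few features) and truncates all but
# the `keep` most likely; stage 2 runs the "expensive" classifier only on
# the survivors, cutting processing time roughly by the truncation ratio.

def cheap_score(x, mean):
    """Squared Euclidean distance of sample x to a class mean (lower = likelier)."""
    return sum((a - b) ** 2 for a, b in zip(x, mean))

def classify(x, class_means, keep=2):
    ranked = sorted(class_means, key=lambda c: cheap_score(x, class_means[c]))
    survivors = ranked[:keep]                     # stage 1: truncate unlikely classes
    # Stage 2: expensive classifier (same score reused here for brevity).
    return min(survivors, key=lambda c: cheap_score(x, class_means[c]))
```

The truncation criteria studied in the abstract govern how `keep` (or a score threshold) is chosen so that the classes discarded at stage 1 contribute a bounded amount to the final error.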
Coupled and decoupled algorithms for semiconductor simulation
NASA Astrophysics Data System (ADS)
Kerkhoven, T.
1985-12-01
Algorithms for the computer-based numerical simulation of the steady-state behavior of MOSFETs are analyzed. The discretization and linearization of the nonlinear partial differential equations, as well as the solution of the linearized systems, are treated systematically. Thus we generate equations which do not exceed the floating point representations of modern computers and for which charge is conserved while appropriate maximum principles are preserved. A typical decoupling algorithm for the solution of the system of PDEs is analyzed as a fixed point mapping T. Bounds exist on the components of the solution and, for sufficiently regular boundary geometries, higher regularity of the derivatives as well. T is a contraction for sufficiently small variation of the boundary data. It therefore follows that under those conditions the decoupling algorithm converges to a unique fixed point which is the weak solution to the system of PDEs in divergence form. A discrete algorithm which corresponds to a possible computer code is shown to converge if the discretization of the PDEs preserves the regularity properties mentioned above. A stronger convergence result is obtained by employing the higher regularity to enforce the weak formulations of the PDEs more strongly. The execution speed of a modification of Newton's method, two versions of a decoupling approach, and a new mixed solution algorithm are compared for a range of problems. The asymptotic complexity of the solution of the linear systems is identical for these approaches in the context of sparse direct solvers if the ordering is done in an optimal way.
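The decoupling analysis above rests on iterating a contraction mapping T to its fixed point. A minimal generic sketch of that iteration (the map used here is an arbitrary scalar contraction, not the semiconductor system's Gummel-style map):

```python
# Fixed-point iteration with a stopping test on successive iterates. If T is
# a contraction with factor q < 1, the Banach fixed-point theorem guarantees
# convergence to the unique fixed point from any starting guess, which is
# the abstract's argument for the decoupling algorithm.

def fixed_point(T, x0, tol=1e-10, max_iter=1000):
    x = x0
    for _ in range(max_iter):
        x_new = T(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")
```

For example, T(x) = 0.5*x + 1 is a contraction with factor 0.5 and fixed point 2, so `fixed_point(lambda x: 0.5 * x + 1, 0.0)` converges to 2.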
ERIC Educational Resources Information Center
Thornton, Tim
2014-01-01
This study is on how one higher education institution included the United Kingdom Professional Standards Framework, developed by the Higher Education Academy, as a strategic benchmark for teaching and learning. The article outlines the strategies used to engage all academic (and academic-related) staff in achieving relevant professional…
Surface solar irradiance from SCIAMACHY measurements: algorithm and validation
NASA Astrophysics Data System (ADS)
Wang, P.; Stammes, P.; Mueller, R.
2011-05-01
Broadband surface solar irradiances (SSI) are, for the first time, derived from SCIAMACHY (SCanning Imaging Absorption spectroMeter for Atmospheric CartograpHY) satellite measurements. The retrieval algorithm, called FRESCO (Fast REtrieval Scheme for Clouds from the Oxygen A band) SSI, is similar to the Heliosat method. In contrast to the standard Heliosat method, the cloud index is replaced by the effective cloud fraction derived from the FRESCO cloud algorithm. The MAGIC (Mesoscale Atmospheric Global Irradiance Code) algorithm is used to calculate clear-sky SSI. The SCIAMACHY SSI product is validated against globally distributed BSRN (Baseline Surface Radiation Network) measurements and compared with ISCCP-FD (International Satellite Cloud Climatology Project Flux Dataset) surface shortwave downwelling fluxes (SDF). For one year of data in 2008, the mean difference between the instantaneous SCIAMACHY SSI and the hourly mean BSRN global irradiances is -4 W m-2 (-1 %) with a standard deviation of 101 W m-2 (20 %). The mean difference between the globally monthly mean SCIAMACHY SSI and ISCCP-FD SDF is less than -12 W m-2 (-2 %) for every month in 2006 and the standard deviation is 62 W m-2 (12 %). The correlation coefficient is 0.93 between SCIAMACHY SSI and BSRN global irradiances and is greater than 0.96 between SCIAMACHY SSI and ISCCP-FD SDF. The evaluation results suggest that the SCIAMACHY SSI product achieves similar mean bias error and root mean square error as the surface solar irradiances derived from polar orbiting satellites with higher spatial resolution.
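At its core, the Heliosat-style combination described above scales a clear-sky irradiance down by a cloudiness measure. A hedged sketch of that step (the linear relation and the placeholder clear-sky value are simplifications; the real FRESCO SSI and MAGIC algorithms involve far more, including atmospheric transmission and surface albedo):

```python
# Simplified Heliosat-style all-sky estimate: clear-sky SSI (from a MAGIC-like
# clear-sky model; a fixed number is used here as a stand-in) is reduced in
# proportion to cloudiness. Using the FRESCO effective cloud fraction c_eff in
# place of the classical Heliosat cloud index is the modification the abstract
# describes; the linear scaling itself is an illustrative assumption.

def surface_solar_irradiance(ssi_clear, c_eff):
    """All-sky SSI in W m-2 from clear-sky SSI and effective cloud fraction."""
    if not 0.0 <= c_eff <= 1.0:
        raise ValueError("effective cloud fraction must lie in [0, 1]")
    return ssi_clear * (1.0 - c_eff)
```

Validation then compares such instantaneous satellite estimates against ground stations (BSRN) and other flux datasets (ISCCP-FD), as reported in the abstract.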
Improving Student Achievement in Math and Science
NASA Technical Reports Server (NTRS)
Sullivan, Nancy G.; Hamsa, Irene Schulz; Heath, Panagiota; Perry, Robert; White, Stacy J.
1998-01-01
As the new millennium approaches, a long anticipated reckoning for the education system of the United States is forthcoming. Years of school reform initiatives have not yielded the anticipated results. A particularly perplexing problem involves the lack of significant improvement of student achievement in math and science. Three "Partnership" projects represent collaborative efforts between Xavier University (XU) of Louisiana, Southern University of New Orleans (SUNO), Mississippi Valley State University (MVSU), and the National Aeronautics and Space Administration (NASA), Stennis Space Center (SSC), to enhance student achievement in math and science. These "Partnerships" are focused on students and teachers in federally designated rural and urban empowerment zones and enterprise communities. The major goals of the "Partnerships" include: (1) The identification and dissemination of key indices of success that account for high performance in math and science; (2) The education of pre-service and in-service secondary teachers in knowledge, skills, and competencies that enhance the instruction of high school math and science; (3) The development of faculty to enhance the quality of math and science courses in institutions of higher education; and (4) The incorporation of technology-based instruction in institutions of higher education. These goals will be achieved by the accomplishment of the following objectives: (1) Delineate significant "best practices" that are responsible for enhancing student outcomes in math and science; (2) Recruit and retain pre-service teachers with undergraduate degrees in Biology, Math, Chemistry, or Physics in a graduate program, culminating with a Master of Arts in Curriculum and Instruction; (3) Provide faculty workshops and opportunities for travel to professional meetings for dissemination of NASA resources information; (4) Implement methodologies and assessment procedures utilizing performance-based applications of higher order
Lin, Frank Yeong-Sung; Hsiao, Chiu-Han; Yen, Hong-Hsu; Hsieh, Yu-Jen
2013-01-01
One of the important applications in Wireless Sensor Networks (WSNs) is video surveillance, which includes the tasks of video data processing and transmission. Processing and transmission of image and video data in WSNs has attracted a lot of attention in recent years. This is known as Wireless Visual Sensor Networks (WVSNs). WVSNs are distributed intelligent systems for collecting image or video data with unique performance, complexity, and quality of service challenges. WVSNs consist of a large number of battery-powered and resource constrained camera nodes. End-to-end delay is a very important Quality of Service (QoS) metric for video surveillance applications in WVSNs. How to meet the stringent delay QoS in resource constrained WVSNs is a challenging issue that requires novel distributed and collaborative routing strategies. This paper proposes a Near-Optimal Distributed QoS Constrained (NODQC) routing algorithm to achieve an end-to-end route with lower delay and higher throughput. A Lagrangian Relaxation (LR)-based routing metric that considers the “system perspective” and “user perspective” is proposed to determine the near-optimal routing paths that satisfy end-to-end delay constraints with high system throughput. The empirical results show that the NODQC routing algorithm outperforms others in terms of higher system throughput with lower average end-to-end delay and delay jitter. This paper shows, for the first time, how to meet the delay QoS while simultaneously achieving higher system throughput in stringently resource constrained WVSNs.
Time-step Considerations in Particle Simulation Algorithms for Coulomb Collisions in Plasmas
Cohen, B I; Dimits, A; Friedman, A; Caflisch, R
2009-10-29
The accuracy of first-order Euler and higher-order time-integration algorithms for grid-based Langevin equations collision models in a specific relaxation test problem is assessed. We show that statistical noise errors can overshadow time-step errors and argue that statistical noise errors can be conflated with time-step effects. Using a higher-order integration scheme may not achieve any benefit in accuracy for examples of practical interest. We also investigate the collisional relaxation of an initial electron-ion relative drift and the collisional relaxation to a resistive steady-state in which a quasi-steady current is driven by a constant applied electric field, as functions of the time step used to resolve the collision processes using binary and grid-based, test-particle Langevin equations models. We compare results from two grid-based Langevin equations collision algorithms to results from a binary collision algorithm for modeling electron-ion collisions. Some guidance is provided regarding how large a time step can be used compared to the inverse of the characteristic collision frequency for specific relaxation processes.
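The first-order Euler treatment the abstract assesses can be illustrated with a minimal Euler-Maruyama sketch of a drag/diffusion Langevin equation for an ensemble of test particles. The drag rate `nu`, normalized temperature `temp`, and ensemble size below are hypothetical illustrative parameters, not the paper's plasma collision model.

```python
import numpy as np

def euler_langevin_relax(v0, nu, temp, dt, n_steps, n_particles=10000, seed=0):
    """First-order Euler-Maruyama integration of the Langevin model
    dv = -nu*v*dt + sqrt(2*nu*temp)*dW for an ensemble of test particles."""
    rng = np.random.default_rng(seed)
    v = np.full(n_particles, v0, dtype=float)
    for _ in range(n_steps):
        # drag (deterministic, O(dt) error) plus diffusion (stochastic kick ~ sqrt(dt))
        v += -nu * v * dt + np.sqrt(2.0 * nu * temp * dt) * rng.standard_normal(n_particles)
    return v

v = euler_langevin_relax(v0=1.0, nu=1.0, temp=0.01, dt=0.05, n_steps=200)
drift = np.mean(v)   # mean drift relaxes toward 0 as exp(-nu*t)
```

The sampling error of ensemble averages scales as 1/sqrt(n_particles), so for modest ensembles it can dominate the O(dt) discretization error of the Euler step, consistent with the conflation of noise and time-step effects the abstract warns about.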
RNA-RNA interaction prediction using genetic algorithm
2014-01-01
Background RNA-RNA interaction plays an important role in the regulation of gene expression and cell development. In this process, an RNA molecule prohibits the translation of another RNA molecule by establishing stable interactions with it. In the RNA-RNA interaction prediction problem, two RNA sequences are given as inputs and the goal is to find the optimal secondary structure of the two RNAs and between them. Several algorithms have been proposed to predict RNA-RNA interaction structure. However, most of them suffer from high computational time. Results In this paper, we introduce a novel genetic algorithm called GRNAs to predict the RNA-RNA interaction. The proposed algorithm is evaluated on standard datasets and achieves appropriate accuracy with lower time complexity in comparison to the other state-of-the-art algorithms. In the proposed algorithm, each individual is a secondary structure of two interacting RNAs. The minimum free energy is considered as the fitness function for each individual. In each generation, the algorithm converges toward the optimal secondary structure (minimum free energy structure) of two interacting RNAs by using crossover and mutation operations. Conclusions This algorithm is properly employed for joint secondary structure prediction. The results achieved on a set of known interacting RNA pairs are compared with the other related algorithms, and the effectiveness and validity of the proposed algorithm have been demonstrated. The time complexity of the algorithm in each iteration is as efficient as that of the other approaches. PMID:25114714
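The evolutionary loop the abstract describes (a population of candidate structures, an energy-based fitness, crossover and mutation per generation) can be sketched generically. The bit-string encoding and toy energy function below are stand-ins for real RNA pairing structures and thermodynamic free energy, not the GRNAs representation.

```python
import random

def genetic_minimize(energy, length, pop_size=40, generations=100,
                     mut_rate=0.05, seed=1):
    """Generic GA skeleton: each individual is a bit string standing in
    for a candidate structure; lower 'energy' (the fitness) is better."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=energy)                     # lower energy = fitter
        parents = pop[:pop_size // 2]            # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)       # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < mut_rate) for bit in child]  # mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=energy)

# Toy "energy" that rewards set bits, so the optimum is all ones.
best = genetic_minimize(lambda s: -sum(s), length=30)
```

Keeping the parent half of the population unmutated preserves the best structure found so far, which is why the loop converges monotonically toward the minimum-energy individual.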
Academic Achievement of Incarcerated Students
ERIC Educational Resources Information Center
Gasa, V. G.
2011-01-01
The main function of prison-based education is to prepare the inmates for return to society. Many higher institutions of education that offer distance learning have opened their doors to accommodate prisoners who want to further their studies. Thus far, many prisoners have received bachelor's degrees from different higher institutions of education…
Randomized Algorithms for Matrices and Data
NASA Astrophysics Data System (ADS)
Mahoney, Michael W.
2012-03-01
often has an interpretation in terms of high degree nodes in data graphs, very small clusters in noisy data, coherence of information, articulation points between clusters, and so on. Historically, the first generation of randomized matrix algorithms (to be described in Section 29.3) did not gain a foothold in NLA and only heuristic variants of them were used in machine learning and data analysis applications. In the second generation of randomized matrix algorithms (to be described in Sections 29.4 and 29.5) that has led to high-quality numerical implementations and useful machine learning and data analysis applications, two key developments were crucial. - Decoupling the randomization from the linear algebra. This was originally implicit within the analysis of the second generation of randomized matrix algorithms, and then it was made explicit. By making this explicit, not only were improved quality of approximation bounds achieved, but also much finer control was achieved in the application of randomization. For example, it permitted easier exploitation of domain expertise, in both numerical analysis and data analysis applications. - Importance of statistical leverage scores. Although these scores have been used historically for outlier detection in statistical regression diagnostics, they have also been crucial in the recent development of randomized matrix algorithms. Roughly, the best random sampling algorithms use these scores to construct an importance sampling distribution to sample with respect to; while the best random projection algorithms rotate to a basis where these scores are approximately uniform and thus in which uniform sampling is appropriate.
Han, Yaozhen; Liu, Xiangjie
2016-05-01
This paper presents a continuous higher-order sliding mode (HOSM) control scheme with time-varying gain for a class of uncertain nonlinear systems. The proposed controller is derived from the concept of geometric homogeneity and super-twisting algorithm, and includes two parts, the first part of which achieves smooth finite time stabilization of pure integrator chains. The second part conquers the twice differentiable uncertainty and realizes system robustness by employing super-twisting algorithm. Particularly, time-varying switching control gain is constructed to reduce the switching control action magnitude to the minimum possible value while keeping the property of finite time convergence. Examples concerning the perturbed triple integrator chains and excitation control for single-machine infinite bus power system are simulated respectively to demonstrate the effectiveness and applicability of the proposed approach. PMID:26920085
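The super-twisting part of the controller described above can be sketched in minimal fixed-gain form on a first-order sliding variable; the paper's time-varying gain construction and homogeneity-based first part are not reproduced, and the gains and disturbance below are illustrative values.

```python
import numpy as np

def super_twisting(s0, k1, k2, dt, n_steps, disturbance):
    """Super-twisting algorithm on a plant s_dot = u + d(t):
    u = -k1*sqrt(|s|)*sign(s) + v,  v_dot = -k2*sign(s)."""
    s, v = s0, 0.0
    for i in range(n_steps):
        u = -k1 * np.sqrt(abs(s)) * np.sign(s) + v   # continuous control part
        v += -k2 * np.sign(s) * dt                   # integral of the discontinuity
        s += (u + disturbance(i * dt)) * dt          # Euler step of the plant
    return s

# Drives s into a small neighborhood of zero despite a bounded,
# Lipschitz disturbance (|d_dot| <= 0.5 here, covered by the gains).
s_final = super_twisting(s0=1.0, k1=1.5, k2=1.1, dt=1e-3, n_steps=20000,
                         disturbance=lambda t: 0.5 * np.sin(t))
```

Because the discontinuous term sits under the integrator `v`, the control signal applied to the plant is continuous, which is the chattering-reduction property the abstract relies on.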
Measuring the success of video segmentation algorithms
NASA Astrophysics Data System (ADS)
Power, Gregory J.
2001-12-01
Appropriate segmentation of video is a key step for applications such as video surveillance, video composing, video compression, storage and retrieval, and automated target recognition. Video segmentation algorithms involve dissecting the video into scenes based on shot boundaries as well as local objects and events based on spatial shape and regional motions. Many algorithmic approaches to video segmentation have been recently reported, but many lack measures to quantify the success of the segmentation, especially in comparison to other algorithms. This paper suggests multiple bench-top measures for evaluating video segmentation. The paper suggests that the measures are most useful when 'truth' data about the video is available, such as precise frame-by-frame object shape. When precise 'truth' data is unavailable, this paper suggests using hand-segmented 'truth' data to measure the success of the video segmentation. Thereby, the ability of the video segmentation algorithm to achieve the same quality of segmentation as the human is obtained in the form of a variance in multiple measures. The paper introduces a suite of measures, each scaled from zero to one. A score of one on a particular measure is a perfect score for a singular segmentation measure. Measures are introduced to evaluate the ability of a segmentation algorithm to correctly detect shot boundaries, to correctly determine spatial shape, and to correctly determine temporal shape. The usefulness of the measures is demonstrated on a simple segmenter designed to detect and segment a ping pong ball from a table tennis image sequence.
Adaptive path planning: Algorithm and analysis
Chen, Pang C.
1995-03-01
To address the need for a fast path planner, we present a learning algorithm that improves path planning by using past experience to enhance future performance. The algorithm relies on an existing path planner to provide solutions to difficult tasks. From these solutions, an evolving sparse network of useful robot configurations is learned to support faster planning. More generally, the algorithm provides a framework in which a slow but effective planner may be improved both cost-wise and capability-wise by a faster but less effective planner coupled with experience. We analyze the algorithm by formalizing the concept of improvability and deriving conditions under which a planner can be improved within the framework. The analysis is based on two stochastic models, one pessimistic (on task complexity), the other randomized (on experience utility). Using these models, we derive quantitative bounds to predict the learning behavior. We use these estimation tools to characterize the situations in which the algorithm is useful and to provide bounds on the training time. In particular, we show how to predict the maximum achievable speedup. Additionally, our analysis techniques are elementary and should be useful for studying other types of probabilistic learning as well.
"Feeling" Hierarchy: The Pathway from Subjective Social Status to Achievement
ERIC Educational Resources Information Center
Destin, Mesmin; Richman, Scott; Varner, Fatima; Mandara, Jelani
2012-01-01
The current study tested a psychosocial mediation model of the association between subjective social status (SSS) and academic achievement for youth. The sample included 430 high school students from diverse racial/ethnic and socioeconomic backgrounds. Those who perceived themselves to be at higher social status levels had higher GPAs. As…
NASA Astrophysics Data System (ADS)
Chou, Jason; Valley, George C.; Hernandez, Vincent J.; Bennett, Corey V.; Pelz, Larry; Heebner, John; Di Nicola, J. M.; Rever, Matthew; Bowers, Mark
2014-03-01
At the National Ignition Facility (NIF), home of the world's largest laser, a critical pulse screening process is used to ensure safe operating conditions for amplifiers and target optics. To achieve this, high speed recording instrumentation up to 34 GHz measures pulse shape characteristics throughout a facility the size of three football fields—which can be a time consuming procedure. As NIF transitions to higher power handling and increased wavelength flexibility, this lengthy and extensive process will need to be performed far more frequently. We have developed an accelerated high-throughput pulse screener that can identify nonconforming pulses across 48 locations using a single, real-time 34-GHz oscilloscope. Energetic pulse shapes from anywhere in the facility are imprinted onto telecom wavelengths, multiplexed, and transported over fiber without distortion. The critical pulse-screening process at high-energy laser facilities can be reduced from several hours to just seconds—allowing greater operational efficiency, agility to system modifications, higher power handling, and reduced costs. Typically, the sampling noise from the oscilloscope places a limit on the achievable signal-to-noise ratio of the measurement, particularly when highly shaped and/or short duration pulses are required by target physicists. We have developed a sophisticated signal processing algorithm for this application that is based on orthogonal matching pursuit (OMP). This algorithm, developed for recovering signals in a compressive sensing system, enables high fidelity single shot screening even for low signal-to-noise ratio measurements.
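The OMP step the screener builds on can be sketched compactly: greedily pick the dictionary column most correlated with the residual, then re-fit all selected columns by least squares. The random dictionary and sparsity level below are toy values for illustration, not NIF's instrument model.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: A is the (m, n) dictionary, y the
    (m,) measurement, k the assumed sparsity of the underlying signal."""
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching atom
        if j not in support:
            support.append(j)
        # re-fit ALL selected atoms jointly (the "orthogonal" step)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# Recover a 2-sparse vector from noiseless random measurements (toy example).
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60))
x_true = np.zeros(60)
x_true[[5, 17]] = [2.0, -1.0]
x_hat = omp(A, A @ x_true, k=2)
```

The joint least-squares re-fit after each selection is what distinguishes OMP from plain matching pursuit and is the source of its robustness at low signal-to-noise ratios.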
NASA Astrophysics Data System (ADS)
Yao, Juncai; Liu, Guizhong
2016-07-01
In order to achieve a higher image compression ratio and improve visual perception of the decompressed image, a novel color image compression scheme based on the contrast sensitivity characteristics of the human visual system (HVS) is proposed. In the proposed scheme, the image is first converted into the YCrCb color space and divided into sub-blocks. Afterwards, the discrete cosine transform is carried out for each sub-block, and three quantization matrices are built to quantize the frequency spectrum coefficients of the images by combining the contrast sensitivity characteristics of HVS. The Huffman algorithm is used to encode the quantized data. The inverse process involves decompression and matching to reconstruct the decompressed color image. Simulations are carried out for two color images. The results show that the average structural similarity index measurement (SSIM) and peak signal to noise ratio (PSNR) under the approximate compression ratio could be increased by 2.78% and 5.48%, respectively, compared with the joint photographic experts group (JPEG) compression. The results indicate that the proposed compression algorithm is feasible and effective, achieving a higher compression ratio while preserving encoding and image quality, and can fully meet the needs of storage and transmission of color images in daily life.
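The transform-coding core of such a scheme (blockwise DCT, quantization matrix, dequantization, inverse DCT) can be sketched as follows. The flat quantization matrix here is an illustrative stand-in for the paper's three HVS-tuned matrices, and the Huffman coding of the quantized integers is omitted.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix: rows are frequencies, columns samples."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C * np.sqrt(2.0 / n)

def quantize_block(block, q):
    """Forward 2-D DCT, quantize with matrix q, dequantize, inverse DCT.
    (In a full codec the rounded integers would be Huffman-coded.)"""
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T          # forward transform
    quant = np.round(coeffs / q)      # the lossy step
    return C.T @ (quant * q) @ C      # dequantize + inverse transform

block = np.outer(np.arange(8.0), np.ones(8)) * 16.0   # smooth gradient test block
recon = quantize_block(block, q=np.full((8, 8), 10.0))
```

An HVS-tuned quantization matrix would use small steps for the low frequencies the eye resolves well and coarse steps for high frequencies, which is how perceptual quality is preserved at higher compression ratios.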
Rare Event Detection Algorithm Of Water Quality
NASA Astrophysics Data System (ADS)
Ungs, M. J.
2011-12-01
A novel method is presented describing the development and implementation of an on-line water quality event detection algorithm. An algorithm was developed to distinguish between normal variation in water quality parameters and changes in these parameters triggered by the presence of contaminant spikes. Emphasis is placed on simultaneously limiting the number of false alarms (which are called false positives) that occur and the number of misses (called false negatives). The problem of excessive false alarms is common to existing change detection algorithms. EPA's standard measure of evaluation for event detection algorithms is to have a false alarm rate of less than 0.5 percent and a false positive rate less than 2 percent (EPA 817-R-07-002). A detailed description of the algorithm's development is presented. The algorithm is tested using historical water quality data collected by a public water supply agency at multiple locations and using spiking contaminants developed by the USEPA, Water Security Division. The water quality parameters of specific conductivity, chlorine residual, total organic carbon, pH, and oxidation reduction potential are considered. Abnormal data sets are generated by superimposing water quality changes on the historical or baseline data. Eddies-ET has defined reaction expressions which specify how the peak or spike concentration of a particular contaminant affects each water quality parameter. Nine default contaminants (Eddies-ET) were previously derived from pipe-loop tests performed at EPA's National Homeland Security Research Center (NHSRC) Test and Evaluation (T&E) Facility. A contaminant strength value of approximately 1.5 is considered to be a significant threat. The proposed algorithm has been able to achieve a combined false alarm rate of less than 0.03 percent for both false positives and for false negatives using contaminant spikes of strength 2 or more.
An experimental evaluation of endmember generation algorithms
NASA Astrophysics Data System (ADS)
Plaza, Antonio; Sánchez-Testal, Juan J.; Plaza, Javier; Valencia, David
2005-11-01
Hyperspectral imagery is a new class of image data which is mainly used in remote sensing. It is characterized by a wealth of spatial and spectral information that can be used to improve detection and estimation accuracy in chemical and biological standoff detection applications. Finding spectral endmembers is a very important task in hyperspectral data exploitation. Over the last decade, several algorithms have been proposed to find spectral endmembers in hyperspectral data. Existing algorithms may be categorized into two different classes: 1) endmember extraction algorithms (EEAs), designed to find pure (or purest available) pixels, and 2) endmember generation algorithms (EGAs), designed to find pure spectral signatures. Such a distinction between an EEA and an EGA has never been made before in the literature. In this paper, we explore the concept of endmember generation as opposed to that of endmember extraction by describing our experience with two EGAs: the optical real-time adaptive spectral identification system (ORASIS), which generates endmembers based on spectral criteria, and the automated morphological endmember extraction (AMEE), which generates endmembers based on spatial/spectral criteria. The performance of these two algorithms is compared to that achieved by two standard algorithms which can perform both as EEAs and EGAs, i.e., the pixel purity index (PPI) and the iterative error analysis (IEA). Both the PPI and IEA may also be used to generate new signatures from existing pixel vectors in the input data, as opposed to the ORASIS method, which generates new spectra using a minimum volume transform. A standard algorithm which behaves as an EEA, i.e., the N-FINDR, is also used in the comparison for demonstration purposes. Experimental results provide several intriguing findings that may help hyperspectral data analysts in the selection of algorithms for specific applications.
Australian Higher Education Reforms--Unification or Diversification?
ERIC Educational Resources Information Center
Coombe, Leanne
2015-01-01
The higher education policy of the previous Australian government aimed to achieve an internationally competitive higher education sector while expanding access opportunities to all Australians. This policy agenda closely reflects global trends that focus on achieving both quality and equity objectives. In this paper, the formulation and…
A Reduced-Complexity Fast Algorithm for Software Implementation of the IFFT/FFT in DMT Systems
NASA Astrophysics Data System (ADS)
Chan, Tsun-Shan; Kuo, Jen-Chih; Wu, An-Yeu (Andy)
2002-12-01
The discrete multitone (DMT) modulation/demodulation scheme is the standard transmission technique in the application of asymmetric digital subscriber lines (ADSL) and very-high-speed digital subscriber lines (VDSL). Although the DMT can achieve a higher data rate compared with other modulation/demodulation schemes, its computational complexity is too high for cost-efficient implementations. For example, it requires a 512-point IFFT/FFT as the modulation/demodulation kernel in ADSL systems and even larger transforms in VDSL systems. The large block size results in heavy computational load in running programmable digital signal processors (DSPs). In this paper, we derive a computationally efficient fast algorithm for the IFFT/FFT. The proposed algorithm can avoid complex-domain operations that are inevitable in conventional IFFT/FFT computation. The resulting software function requires less computational complexity. We show that it requires only 17% of the multiplications to compute the IFFT and FFT compared with the Cooley-Tukey algorithm. Hence, the proposed fast algorithm is very suitable for firmware development in reducing the MIPS count in programmable DSPs.
Quantum Algorithms for Problems in Number Theory, Algebraic Geometry, and Group Theory
NASA Astrophysics Data System (ADS)
van Dam, Wim; Sasaki, Yoshitaka
2013-09-01
Quantum computers can execute algorithms that sometimes dramatically outperform classical computation. Undoubtedly the best-known example of this is Shor's discovery of an efficient quantum algorithm for factoring integers, whereas the same problem appears to be intractable on classical computers. Understanding what other computational problems can be solved significantly faster using quantum algorithms is one of the major challenges in the theory of quantum computation, and such algorithms motivate the formidable task of building a large-scale quantum computer. This article will review the current state of quantum algorithms, focusing on algorithms for problems with an algebraic flavor that achieve an apparent superpolynomial speedup over classical computation.
Evolutionary pattern search algorithms
Hart, W.E.
1995-09-19
This paper defines a class of evolutionary algorithms called evolutionary pattern search algorithms (EPSAs) and analyzes their convergence properties. This class of algorithms is closely related to evolutionary programming, evolution strategies, and real-coded genetic algorithms. EPSAs are self-adapting systems that modify the step size of the mutation operator in response to the success of previous optimization steps. The rule used to adapt the step size can be used to provide a stationary point convergence theory for EPSAs on any continuous function. This convergence theory is based on an extension of the convergence theory for generalized pattern search methods. An experimental analysis of the performance of EPSAs demonstrates that these algorithms can perform a level of global search that is comparable to that of canonical EAs. We also describe a stopping rule for EPSAs, which reliably terminated near stationary points in our experiments. This is the first stopping rule for any class of EAs that can terminate at a given distance from stationary points.
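The success-based step-size adaptation described above can be illustrated with the classic (1+1) evolution strategy and a 1/5-success-style rule: expand the mutation step after an improving move, contract it otherwise. This is a simple relative of EPSAs for illustration, not the algorithm class the paper analyzes, and the multipliers are illustrative choices.

```python
import random

def one_plus_one_es(f, x0, sigma0=1.0, iters=500, seed=2):
    """(1+1) evolution strategy with a success-based step-size rule:
    one parent, one Gaussian-mutated child per iteration."""
    rng = random.Random(seed)
    x, sigma = list(x0), sigma0
    fx = f(x)
    for _ in range(iters):
        cand = [xi + rng.gauss(0, sigma) for xi in x]
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
            sigma *= 1.5      # success: take bigger steps
        else:
            sigma *= 0.85     # failure: take smaller steps
    return x, fx

sphere = lambda v: sum(t * t for t in v)     # toy continuous objective
x_best, f_best = one_plus_one_es(sphere, [5.0, -3.0])
```

The two multipliers balance so that the step size is stationary at roughly a 29% success rate, keeping the mutation scale matched to the local landscape, which is the self-adaptation property the convergence theory formalizes.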
Radio interferometric calibration via ordered-subsets algorithms: OS-LS and OS-SAGE calibrations
NASA Astrophysics Data System (ADS)
Kazemi, S.; Yatawatta, S.; Zaroubi, S.
2013-10-01
The main objective of this work is to accelerate the maximum likelihood (ML) estimation procedure in radio interferometric calibration. We introduce the ordered-subsets-least-squares (OS-LS) and the ordered-subsets-space alternating generalized expectation (OS-SAGE) radio interferometric calibration methods, as a combination of the OS method with the LS and SAGE maximization calibration techniques, respectively. The OS algorithm speeds up the ML estimation and achieves nearly the same level of accuracy of solutions as the one obtained by the non-OS methods. We apply the OS-LS and OS-SAGE calibration methods to simulated observations and show that these methods have a much higher convergence rate relative to the conventional LS and SAGE techniques. Moreover, the obtained results show that the OS-SAGE calibration technique has a superior performance compared to the OS-LS calibration method in the sense of achieving more accurate results while having significantly less computational cost.
Florida's Fit to Achieve Program.
ERIC Educational Resources Information Center
Sander, Allan N.; And Others
1993-01-01
Describes Florida's "Fit to Achieve," a cardiovascular fitness education program for elementary students. Children are taught responsibility for their own cardiovascular fitness through proper exercise, personal exercise habits, and regular aerobic exercise. The program stresses collaborative effort between physical educators and classroom…
Adequacy, Litigation, and Student Achievement
ERIC Educational Resources Information Center
Glenn, William
2008-01-01
The court system has been an increasingly important forum in the attempts to remedy the persistent achievement gaps in American education. In the past twenty years, school finance adequacy litigation has replaced desegregation as the most widely used legal strategy in these efforts. Despite the widespread use of adequacy litigation, few…
Scheduling and Achievement. Research Brief
ERIC Educational Resources Information Center
Walker, Karen
2006-01-01
To use a block schedule or a traditional schedule? Which structure will produce the best and highest achievement rates for students? The research is mixed on this due to numerous variables such as: (1) socioeconomic levels; (2) academic levels; (3) length of time a given schedule has been in operation; (4) strategies being used in the classrooms;…
School Desegregation and Black Achievement.
ERIC Educational Resources Information Center
Cook, Thomas; And Others
Seven papers commissioned by the National Institute of Education in order to clarify the state of recent knowledge about the effects of school desegregation on the academic achievement of black students are contained in this report. The papers, which analyze 19 "core" empirical studies on this topic, include: (1) "What Have Black Children Gained…
Mobility and the Achievement Gap.
ERIC Educational Resources Information Center
Skandera, Hanna; Sousa, Richard
2002-01-01
Research indicates that low achievement scores relate significantly to high school mobility rates. One explanation for this relationship is curricular inconsistency. Some suggest that school choice could contribute to a solution by breaking the link between a child's home address and school address, thus allowing students to remain at one school…
The Racial Academic Achievement Gap
ERIC Educational Resources Information Center
Green, Toneka M.
2008-01-01
Closing the racial academic achievement gap is a problem that must be solved in order for future society to properly function. Minorities including African-American and Latino students' standardized test scores are much lower than white students. By the end of fourth grade, African American, Latino, and poor students of all races are two years…
Can Judges Improve Academic Achievement?
ERIC Educational Resources Information Center
Greene, Jay P.; Trivitt, Julie R.
2008-01-01
Over the last 3 decades student achievement has remained essentially unchanged in the United States, but not for a lack of spending. Over the same period a myriad of education reforms have been suggested and per-pupil spending has more than doubled. Since the 1990s the education reform attempts have frequently included judicial decisions to revise…
Game Addiction and Academic Achievement
ERIC Educational Resources Information Center
Sahin, Mehmet; Gumus, Yusuf Yasin; Dincel, Sezen
2016-01-01
The primary aim of this study was to investigate the correlation between game addiction and academic achievement. The secondary aim was to adapt a self-report instrument to measure game addiction. Three hundred and seventy high school students participated in this study. Data were collected via an online questionnaire that included a brief…
Meeting a Math Achievement Crisis
ERIC Educational Resources Information Center
Jennings, Lenora; Likis, Lori
2005-01-01
An urban community spotlighted declining mathematics achievement and took some measures, in which the students' performance increased substantially. The Benjamin Banneker Charter Public School in Cambridge, Massachusetts, engaged the entire community and launched the campaign called "Math Everywhere", which changed Benjamin Banneker's culture as…
Achieving Results in MBA Communication.
ERIC Educational Resources Information Center
Barrett, Deborah J.
2002-01-01
Describes how Rice University's Jones Graduate School of Management achieves their mission for the communication program. Discusses three keys to the success of the program: individual coaching, integrated team instruction, and constant assessment of the students and the program. Presents an overview of the program. (SG)
Attribution Theory in Science Achievement
ERIC Educational Resources Information Center
Craig, Martin
2013-01-01
Recent research reveals consistent lags in American students' science achievement scores. Not only are the scores lower in the United States compared to other developed nations, but even within the United States, too many students are well below science proficiency scores for their grade levels. The current research addresses this problem by…
Graders' Mathematics Achievement
ERIC Educational Resources Information Center
Bond, John B.; Ellis, Arthur K.
2013-01-01
The purpose of this experimental study was to investigate the effects of metacognitive reflective assessment instruction on student achievement in mathematics. The study compared the performance of 141 students who practiced reflective assessment strategies with students who did not. A posttest-only control group design was employed, and results…
Epistemological Beliefs and Academic Achievement
ERIC Educational Resources Information Center
Arslantas, Halis Adnan
2016-01-01
This study aimed to identify the relationship between teacher candidates' epistemological beliefs and academic achievement. The participants of the study were 353 teacher candidates studying their fourth year at the Education Faculty. The Epistemological Belief Scale was used which adapted to Turkish through reliability and validity work by…
Achieving a sustainable service advantage.
Coyne, K P
1993-01-01
Many managers believe that superior service should play little or no role in competitive strategy; they maintain that service innovations are inherently copiable. However, the author states that this view is too narrow. For a company to achieve a lasting service advantage, it must base a new service on a capability gap that competitors cannot or will not copy.
Achievement in Two School Cultures.
ERIC Educational Resources Information Center
Borth, Audrey M.
The purpose of the study was to assess non-intellective correlates of achievement in a lower-class, all black, urban elementary school. These students were compared with a University school population which was different in many dimensions. There were residual similarities relative to the general role of the elementary school student. In neither…
Literacy Achievement in Nongraded Classrooms
ERIC Educational Resources Information Center
Kreide, Anita Therese
2011-01-01
This longitudinal quantitative study compared literacy achievement of students from second through sixth grade based on two organizational systems: graded (traditional) and nongraded (multiage) classrooms. The California Standards Test (CST) scaled and proficiency scores for English-Language Arts (ELA) were used as the study's independent variable…
Predicting Achievement for Deaf Children.
ERIC Educational Resources Information Center
Bonham, S. J., Jr.
This study was done to determine the predictive value of individual and group achievement tests when used to evaluate deaf children. The 36 children selected for this study were in grades 2, 4, and 6 in the Kennedy School in Dayton, Ohio. All had severe auditory handicaps and were 10 to 16 years old. Four psychologists administered the following…
Washington State's Student Achievement Initiative
ERIC Educational Resources Information Center
Pettitt, Maureen; Prince, David
2010-01-01
This article describes Washington State's Student Achievement Initiative, an accountability system implemented in 2005-06 that measures students' gains in college readiness, college credits earned, and degree or certificate completion. The goal of the initiative is to increase educational attainment by focusing on the critical momentum points…
Perlman receives Sustained Achievement Award
NASA Astrophysics Data System (ADS)
Petit, Charles; Perlman, David
David Perlman was awarded the Sustained Achievement Award at the AGU Fall Meeting Honors Ceremony, which was held on December 10, 1997, in San Francisco, California. The award recognizes a journalist who has made significant, lasting, and consistent contributions to accurate reporting or writing on the geophysical sciences for the general public.
Great achievements by dedicated nurses.
Whyte, Alison
2016-04-27
Like many nurses, those featured here are motivated by a desire to do everything they can to give high quality care to their patients. Nurses are often reluctant to seek recognition for their achievements, but by talking publicly about the difference they have made, Gillian Elwood, Anja Templin and Sandra Wood are helping to share good practice. PMID:27191295
The Widening Income Achievement Gap
ERIC Educational Resources Information Center
Reardon, Sean F.
2013-01-01
Has the academic achievement gap between high-income and low-income students changed over the last few decades? If so, why? And what can schools do about it? Researcher Sean F. Reardon conducted a comprehensive analysis of research to answer these questions and came up with some striking findings. In this article, he shows that income-related…
Goal Setting to Achieve Results
ERIC Educational Resources Information Center
Newman, Rich
2012-01-01
Both districts and individual schools have a very clear set of goals and skills for their students to achieve and master. In fact, except in rare cases, districts and schools develop very detailed goals they wish to pursue. In most cases, unfortunately, only the teachers and staff at a particular school or district-level office are aware of the…
Helping Rural Schools Achieve Success.
ERIC Educational Resources Information Center
Collins, Susan
2003-01-01
Senator Collins of Maine plans to fight for proper federal funding of the Rural Education Achievement Program (REAP), which allows rural schools to combine federal funding sources. Collins and Senator Dianne Feinstein will soon introduce legislation that will eliminate inequities in the current Social Security law that penalize teachers and other…
School Districts and Student Achievement
ERIC Educational Resources Information Center
Chingos, Matthew M.; Whitehurst, Grover J.; Gallaher, Michael R.
2015-01-01
School districts are a focus of education reform efforts in the United States, but there is very little existing research about how important they are to student achievement. We fill this gap in the literature using 10 years of student-level, statewide data on fourth- and fifth-grade students in Florida and North Carolina. A variance decomposition…
Potential-Based Achievement Goals
ERIC Educational Resources Information Center
Elliot, Andrew; Murayama, Kou; Kobeisy, Ahmed; Lichtenfeld, Stephanie
2015-01-01
Background: Self-based achievement goals use one's own intrapersonal trajectory as a standard of evaluation, and this intrapersonal trajectory may be grounded in one's past (past-based goals) or one's future potential (potential-based goals). Potential-based goals have been overlooked in the literature to date. Aims: The primary aim of the present…
Socioeconomic Determinants of Academic Achievement
ERIC Educational Resources Information Center
Tomul, Ekber; Savasci, Havva Sebile
2012-01-01
This study aims to investigate the relationship between academic achievement and the socioeconomic characteristics of elementary school 7th grade students in Burdur. The population of the study was 7th grade students enrolled in elementary schools in Burdur in the 2007-2008 academic year. Two-stage sampling was chosen as suitable for the…
ERIC Educational Resources Information Center
Altbach, Philip G.
The comparative higher education course offered at the State University of New York at Buffalo is briefly described, and a course schedule is presented, including required and recommended readings for each topic. The course is intended to provide a broad cross-cultural perspective and considers the growth and development of universities in Europe,…
ERIC Educational Resources Information Center
Pillay, Gerald J.
2009-01-01
The question of the value of higher education is today set in the context of an unprecedented banking and financial crisis. In this context of fundamental change and financial realignment, it is important that we as members of the university remake our case for why the university deserves to be considered alongside all those other worthy causes…