Korean Experience and Achievement in Higher Education
ERIC Educational Resources Information Center
Lee, Jeong-Kyu
2001-01-01
The purpose of this paper is to trace the transition of Korean education reform and to weigh Korean experience and achievement in contemporary higher education. The paper first presents a historical perspective on higher education in light of educational reform. Second, the study reviews the achievements of Korean higher education…
Using Records of Achievement in Higher Education.
ERIC Educational Resources Information Center
Assiter, Alison, Ed.; Shaw, Eileen, Ed.
This collection of 22 essays examines the use of records of achievement (student profiles or portfolios) in higher and vocational education in the United Kingdom. They include: (1) "Records of Achievement: Background, Definitions, and Uses" (Alison Assiter and Eileen Shaw); (2) "Profiling in Higher Education" (Alison Assiter and Angela Fenwick);…
Higher Education Is Key To Achieving MDGs
ERIC Educational Resources Information Center
Association of Universities and Colleges of Canada, 2004
2004-01-01
Imagine trying to achieve the Millennium Development Goals (MDGs) without higher education. As key institutions of civil society, universities are uniquely positioned between the communities they serve and the governments they advise. Through the CIDA-funded University Partnerships in Cooperation and Development program, Canadian universities have…
Higher Education Counts: Achieving Results. 2007 Report
ERIC Educational Resources Information Center
Connecticut Department of Higher Education (NJ1), 2007
2007-01-01
"Higher Education Counts" is the annual accountability report on Connecticut's state system of higher education, as required under Connecticut General Statutes Section 10a-6a. The report contains accountability measures developed through the Performance Measures Task Force and approved by the Board of Governors for Higher Education. The measures…
Higher Education Counts: Achieving Results. 2009 Report
ERIC Educational Resources Information Center
Connecticut Department of Higher Education (NJ1), 2009
2009-01-01
"Higher Education Counts" is the annual accountability report on Connecticut's state system of higher education, as required under Connecticut General Statutes Section 10a-6a. The report contains accountability measures developed through the Performance Measures Task Force and approved by the Board of Governors for Higher Education. The measures…
Higher Education Counts: Achieving Results. 2006 Report
ERIC Educational Resources Information Center
Connecticut Department of Higher Education (NJ1), 2006
2006-01-01
"Higher Education Counts" is the annual accountability report on Connecticut's state system of higher education, as required under Connecticut General Statutes Section 10a-6a. The report contains accountability measures developed through the Performance Measures Task Force and approved by the Board of Governors for Higher Education. The measures…
Higher Education Counts: Achieving Results. 2008 Report
ERIC Educational Resources Information Center
Connecticut Department of Higher Education (NJ1), 2008
2008-01-01
"Higher Education Counts" is the annual accountability report on Connecticut's state system of higher education, as required under Connecticut General Statutes Section 10a-6a. The report contains accountability measures developed through the Performance Measures Task Force and approved by the Board of Governors for Higher Education. The measures…
Higher Education Counts: Achieving Results, 2011. Report
ERIC Educational Resources Information Center
Connecticut Department of Higher Education (NJ1), 2011
2011-01-01
This report, issued by the Connecticut Department of Higher Education, reports on trends in higher education for the year 2011. Six goals are presented, each with at least two indicators. Each indicator is broken down into the following subsections: About This Indicator; Highlights; and In the Future. Most indicators also include statistical…
Achieving Quality Learning in Higher Education.
ERIC Educational Resources Information Center
Nightingale, Peggy; O'Neil, Mike
This volume on quality learning in higher education discusses issues of good practice, particularly action learning and Total Quality Management (TQM)-type strategies, and illustrates them with seven case studies in Australia and the United Kingdom. Chapter 1 discusses issues and problems in defining quality in higher education. Chapter 2 looks at…
Achievable Polarization for Heat-Bath Algorithmic Cooling.
Rodríguez-Briones, Nayeli Azucena; Laflamme, Raymond
2016-04-29
Pure quantum states play a central role in applications of quantum information, both as initial states for quantum algorithms and as resources for quantum error correction. Preparation of highly pure states that satisfy the threshold for quantum error correction remains a challenge, not only for ensemble implementations like NMR or ESR but also for other technologies. Heat-bath algorithmic cooling is a method to increase the purity of a set of qubits coupled to a bath. We investigated the achievable polarization by analyzing the limit when no more entropy can be extracted from the system. In particular, we give an analytic form for the maximum polarization achievable for the case when the initial state of the qubits is totally mixed, and the corresponding steady state of the whole system. It is, however, possible to reach higher polarization while starting with certain states; thus, our result provides an achievable bound. We also give the number of steps needed to get a specific required polarization. PMID:27176508
Achievable Polarization for Heat-Bath Algorithmic Cooling
NASA Astrophysics Data System (ADS)
Rodríguez-Briones, Nayeli Azucena; Laflamme, Raymond
2016-04-01
Pure quantum states play a central role in applications of quantum information, both as initial states for quantum algorithms and as resources for quantum error correction. Preparation of highly pure states that satisfy the threshold for quantum error correction remains a challenge, not only for ensemble implementations like NMR or ESR but also for other technologies. Heat-bath algorithmic cooling is a method to increase the purity of a set of qubits coupled to a bath. We investigated the achievable polarization by analyzing the limit when no more entropy can be extracted from the system. In particular, we give an analytic form for the maximum polarization achievable for the case when the initial state of the qubits is totally mixed, and the corresponding steady state of the whole system. It is, however, possible to reach higher polarization while starting with certain states; thus, our result provides an achievable bound. We also give the number of steps needed to get a specific required polarization.
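The asymptotic limit described in this abstract can be illustrated numerically. The closed form used below, eps_max = tanh(2^(n-2) · artanh(eps_b)) for n qubits initialized in the totally mixed state with bath polarization eps_b, is a sketch of the shape of the analytic bound discussed in this line of work; consult the paper for the exact statement and its conditions.

```python
import math

def hbac_max_polarization(n_qubits: int, eps_bath: float) -> float:
    """Asymptotic target-qubit polarization for heat-bath algorithmic
    cooling starting from the totally mixed state.

    Assumes the closed form eps_max = tanh(2**(n-2) * artanh(eps_b));
    this is an illustrative reconstruction, not a quotation of the
    paper's result.
    """
    if n_qubits < 2:
        raise ValueError("need at least 2 qubits (target + reset)")
    return math.tanh(2 ** (n_qubits - 2) * math.atanh(eps_bath))

# In the low-polarization regime this reduces to about 2**(n-2) * eps_b,
# i.e. each extra qubit roughly doubles the achievable boost.
```

For example, with a bath polarization of 0.01, three qubits give roughly double the bath polarization, and the bound grows rapidly with each added qubit.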
Higher Education Counts: Achieving Results, 2008. Executive Summary
ERIC Educational Resources Information Center
Connecticut Department of Higher Education (NJ1), 2008
2008-01-01
"Higher Education Counts" is the annual accountability report on Connecticut's system of higher education. Since 2000, the report has been the primary vehicle for reporting higher education's progress toward achieving six, statutorily-defined state goals: (1) To enhance student learning and promote academic excellence; (2) To join with elementary…
Higher Education Counts: Achieving Results. 2006 Executive Summary
ERIC Educational Resources Information Center
Connecticut Department of Higher Education (NJ1), 2006
2006-01-01
"Higher Education Counts" is the annual accountability report on Connecticut's system of higher education. Since 2000, the report has been the principle vehicle for reporting higher education's progress toward achieving six, statutorily-defined state goals: (1) To enhance student learning and promote academic excellence; (2) To join with…
Higher Education Counts: Achieving Results. 2009 Executive Summary
ERIC Educational Resources Information Center
Connecticut Department of Higher Education (NJ1), 2009
2009-01-01
"Higher Education Counts" is the annual accountability report on Connecticut's system of higher education. Since 2000, the report has been the primary vehicle for reporting higher education's progress toward achieving six, statutorily-defined state goals: (1) To enhance student learning and promote academic excellence; (2) To join with elementary…
Higher Education Counts: Achieving Results. 2007 Executive Summary
ERIC Educational Resources Information Center
Connecticut Department of Higher Education (NJ1), 2007
2007-01-01
"Higher Education Counts" is the annual accountability report on Connecticut's system of higher education. Since 2000, the report has been the primary vehicle for reporting higher education's progress toward achieving six, statutorily-defined state goals: (1) To enhance student learning and promote academic excellence; (2) To join with elementary…
Achieving Equity in Higher Education: The Unfinished Agenda
ERIC Educational Resources Information Center
Astin, Alexander W.; Astin, Helen S.
2015-01-01
In this retrospective account of their scholarly work over the past 45 years, Alexander and Helen Astin show how the struggle to achieve greater equity in American higher education is intimately connected to issues of character development, leadership, civic responsibility, and spirituality. While shedding some light on a variety of questions…
Achieving Higher Energies via Passively Driven X-band Structures
NASA Astrophysics Data System (ADS)
Sipahi, Taylan; Sipahi, Nihan; Milton, Stephen; Biedron, Sandra
2014-03-01
Due to their higher intrinsic shunt impedance, X-band accelerating structures can achieve significant gradients with relatively modest input powers, and this can lead to more compact particle accelerators. At the Colorado State University Accelerator Laboratory (CSUAL) we would like to adapt this technology to our 1.3 GHz L-band accelerator system using a passively driven 11.7 GHz traveling wave X-band configuration that capitalizes on the high shunt impedances achievable in X-band accelerating structures in order to increase our overall beam energy in a manner that does not require investment in an expensive, custom, high-power X-band klystron system. Here we provide the design details of the X-band structures that will allow us to achieve our goal of reaching the maximum practical net potential across the X-band accelerating structure while driven solely by the beam from the L-band system.
Radiosity algorithms using higher order finite element methods
Troutman, R.; Max, N.
1993-08-01
Many of the current radiosity algorithms create a piecewise constant approximation to the actual radiosity. Through interpolation and extrapolation, a continuous solution is obtained. An accurate solution is found by increasing the number of patches which describe the scene. This has the effect of increasing the computation time as well as the memory requirements. By using techniques found in the finite element method, we can incorporate an interpolation function directly into our form factor computation. We can then use fewer elements to achieve a more accurate solution. Two algorithms, derived from the finite element method, are described and analyzed.
Higher order nonlinear chirp scaling algorithm for medium Earth orbit synthetic aperture radar
NASA Astrophysics Data System (ADS)
Wang, Pengbo; Liu, Wei; Chen, Jie; Yang, Wei; Han, Yu
2015-01-01
Due to the larger orbital arc and longer synthetic aperture time in medium Earth orbit (MEO) synthetic aperture radar (SAR), it is difficult for conventional SAR imaging algorithms to achieve a good imaging result. An improved higher order nonlinear chirp scaling (NLCS) algorithm is presented for MEO SAR imaging. First, the point target spectrum of the modified equivalent squint range model-based signal is derived, where a concise expression is obtained by the method of series reversion. Second, the well-known NLCS algorithm is modified according to the new spectrum and an improved algorithm is developed. The range dependence of the two-dimensional point target reference spectrum is removed by improved CS processing, and accurate focusing is realized through range-matched filter and range-dependent azimuth-matched filter. Simulations are performed to validate the presented algorithm.
Algorithmic and Experimental Computation of Higher-Order Safe Primes
NASA Astrophysics Data System (ADS)
Díaz, R. Durán; Masqué, J. Muñoz
2008-09-01
This paper deals with a class of special primes called safe primes. In the regular definition, an odd prime p is safe if at least one of (p±1)/2 is prime. Safe primes have been recommended as factors of RSA moduli. In this paper, the concept of safe primes is extended to higher-order safe primes, and an explicit formula to compute the density of this class of primes in the set of integers is supplied. Finally, explicit conditions are provided permitting the algorithmic computation of safe primes of arbitrary order. Some experimental results are provided as well.
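A small sketch of the idea: starting from the definition quoted in the abstract (p is safe if at least one of (p±1)/2 is prime), one natural recursive reading of "order-k safe" is that one of (p±1)/2 must itself be order-(k-1) safe. That recursion is an assumption made here for illustration, not a quotation of the paper's definition.

```python
def is_prime(n: int) -> bool:
    """Deterministic Miller-Rabin, valid for 64-bit integers."""
    if n < 2:
        return False
    small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in small:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in small:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def is_safe(p: int, order: int = 1) -> bool:
    """Order 0: ordinary prime.  Order k >= 1: p is prime and at least
    one of (p-1)/2, (p+1)/2 is an order-(k-1) safe prime.  This
    recursive extension is an assumption based on the abstract."""
    if not is_prime(p):
        return False
    if order == 0:
        return True
    return any(is_safe((p + d) // 2, order - 1) for d in (-1, 1))
```

For example, 7 is safe because (7-1)/2 = 3 is prime, and 11 is order-2 safe under this reading because (11-1)/2 = 5 is itself safe.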
Elementary School Counselors and Teachers: Collaborators for Higher Student Achievement
ERIC Educational Resources Information Center
Sink, Christopher A.
2008-01-01
In this article I contend that elementary school teachers need to work more closely with school counselors to enhance student learning and academic performance and to narrow the achievement gap among student groups. Research showing the influence that counselors can exert on the educational process is summarized. Using the American School…
Charting the course for nurses' achievement of higher education levels.
Kovner, Christine T; Brewer, Carol; Katigbak, Carina; Djukic, Maja; Fatehi, Farida
2012-01-01
To improve patient outcomes and meet the challenges of the U.S. health care system, the Institute of Medicine recommends higher educational attainment for the nursing workforce. Characteristics of registered nurses (RNs) who pursue additional education are poorly understood, and this information is critical to planning long-term strategies for U.S. nursing education. To identify factors predicting enrollment and completion of an additional degree among those with an associate or bachelor's as their pre-RN licensure degree, we performed logistic regression analysis on data from an ongoing nationally representative panel study following the career trajectories of newly licensed RNs. For associate degree RNs, predictors of obtaining a bachelor's degree are the following: being Black, living in a rural area, nonnursing work experience, higher positive affectivity, higher work motivation, working in the intensive care unit, and working the day shift. For bachelor's RNs, predictors of completing a master's degree are the following: being Black, nonnursing work experience, holding more than one job, working the day shift, working voluntary overtime, lower intent to stay at current employer, and higher work motivation. Mobilizing the nurse workforce toward higher education requires integrated efforts from policy makers, philanthropists, employers, and educators to mitigate the barriers to continuing education. PMID:23158196
Strategies for Increasing Academic Achievement in Higher Education
ERIC Educational Resources Information Center
Ensign, Julene; Woods, Amelia Mays
2014-01-01
Higher education today faces unique challenges. Decreasing student engagement, increasing diversity, and limited resources all contribute to the issues being faced by students, educators, and administrators alike. The unique characteristics and expectations that students bring to their professional programs require new methods of addressing…
A new adaptive GMRES algorithm for achieving high accuracy
Sosonkina, M.; Watson, L.T.; Kapania, R.K.; Walker, H.F.
1996-12-31
GMRES(k) is widely used for solving nonsymmetric linear systems. However, it is inadequate either when it converges only for k close to the problem size or when numerical error in the modified Gram-Schmidt process used in the GMRES orthogonalization phase dramatically affects the algorithm performance. An adaptive version of GMRES(k) which tunes the restart value k based on criteria estimating the GMRES convergence rate for the given problem is proposed here. The essence of the adaptive GMRES strategy is to adapt the parameter k to the problem, similar in spirit to how a variable order ODE algorithm tunes the order k. With FORTRAN 90, which provides pointers and dynamic memory management, dealing with the variable storage requirements implied by varying k is not too difficult. The parameter k can be both increased and decreased; an increase-only strategy is described first, followed by pseudocode.
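The increase-only strategy mentioned in the abstract can be sketched as follows: run standard restarted GMRES cycles and, whenever a cycle fails to reduce the residual norm by a required factor, grow the restart value k. The growth rule and threshold `rho` below are illustrative assumptions; the paper's adaptation criteria are more refined.

```python
import numpy as np

def gmres_restarted(A, b, k, tol=1e-8, max_outer=50):
    """GMRES(k) with a simple increase-only adaptive restart (a sketch
    of the idea in the abstract, not the authors' exact algorithm)."""
    n = len(b)
    x = np.zeros(n)
    rho = 0.9            # required per-cycle residual reduction (assumed)
    k_max = n
    for _ in range(max_outer):
        r = b - A @ x
        beta = np.linalg.norm(r)
        if beta < tol:
            return x, k
        # Arnoldi process with modified Gram-Schmidt orthogonalization
        Q = np.zeros((n, k + 1))
        H = np.zeros((k + 1, k))
        Q[:, 0] = r / beta
        m = k
        for j in range(k):
            w = A @ Q[:, j]
            for i in range(j + 1):
                H[i, j] = Q[:, i] @ w
                w -= H[i, j] * Q[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-14:      # lucky breakdown
                m = j + 1
                break
            Q[:, j + 1] = w / H[j + 1, j]
        # least-squares solve of the small projected problem
        e1 = np.zeros(m + 1)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)
        x = x + Q[:, :m] @ y
        new_res = np.linalg.norm(b - A @ x)
        if new_res > rho * beta and k < k_max:
            k = min(2 * k, k_max)        # convergence too slow: grow k
    return x, k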
Fuzzy Pool Balance: An algorithm to achieve a two dimensional balance in distributed storage systems
NASA Astrophysics Data System (ADS)
Wu, Wenjing; Chen, Gang
2014-06-01
The limitation of scheduling modules and the gradual addition of disk pools in distributed storage systems often result in imbalances among their disk pools in terms of both disk usage and file count. This can cause various problems to the storage system such as single point of failure, low system throughput and imbalanced resource utilization and system loads. An algorithm named Fuzzy Pool Balance (FPB) is proposed here to solve this problem. The input of FPB is the current file distribution among disk pools and the output is a file migration plan indicating what files are to be migrated to which pools. FPB uses an array to classify the files by their sizes. The file classification array is dynamically calculated with a defined threshold named Tmax that defines the allowed pool disk usage deviations. File classification is the basis of file migration. FPB also defines the Immigration Pool (IP) and Emigration Pool (EP) according to the pool disk usage and File Quantity Ratio (FQR) that indicates the percentage of each category of files in each disk pool, so files with higher FQR in an EP will be migrated to IP(s) with a lower FQR of this file category. To verify this algorithm, we implemented FPB on an ATLAS Tier2 dCache production system. The results show that FPB can achieve a very good balance in both free space and file counts, and adjusting the threshold value Tmax and the correction factor to the average FQR can achieve a tradeoff between free space and file count.
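A toy sketch of the migration-planning idea: pools above the mean disk usage act as emigration pools (EPs) and pools below it as immigration pools (IPs), with a threshold playing the role of Tmax, and a plan of file moves is produced until usages fall within the band. All names, the greedy move order, and the threshold semantics here are illustrative assumptions; FPB's actual file-size classification and FQR-based selection are richer.

```python
def plan_migrations(pools, t_max=0.05):
    """Greedy sketch of a pool-balancing plan.

    `pools` maps pool name -> list of file sizes.  Returns a list of
    (file_size, src_pool, dst_pool) moves that brings every pool's
    usage within t_max (relative) of the mean.  Illustrative only."""
    plan = []
    usage = {p: sum(fs) for p, fs in pools.items()}
    files = {p: sorted(fs, reverse=True) for p, fs in pools.items()}
    mean = sum(usage.values()) / len(usage)

    def over(p):
        return usage[p] - mean > t_max * mean

    def under(p):
        return mean - usage[p] > t_max * mean

    for _ in range(10_000):              # safety cap for this sketch
        if not any(over(p) for p in pools):
            break
        src = max(usage, key=usage.get)  # fullest pool emigrates
        dst = min(usage, key=usage.get)  # emptiest pool immigrates
        if not files[src] or not under(dst):
            break
        f = files[src].pop()             # move the smallest file first
        usage[src] -= f
        usage[dst] += f
        files[dst].append(f)
        plan.append((f, src, dst))
    return plan
```

Moving small files first keeps each step's overshoot bounded, which is one simple way to converge on both of FPB's balance dimensions (space first, then file count).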
DeLay, Dawn; Laursen, Brett; Kiuru, Noona; Poikkeus, Anna-Maija; Aunola, Kaisa; Nurmi, Jari-Erik
2015-11-01
This study was designed to investigate friend influence over mathematical reasoning in a sample of 374 children in 187 same-sex friend dyads (184 girls in 92 friendships; 190 boys in 95 friendships). Participants completed surveys that measured mathematical reasoning in the 3rd grade (approximately 9 years old) and 1 year later in the 4th grade (approximately 10 years old). Analyses designed for dyadic data (i.e., longitudinal actor-partner interdependence model) indicated that higher achieving friends influenced the mathematical reasoning of lower achieving friends, but not the reverse. Specifically, greater initial levels of mathematical reasoning among higher achieving partners in the 3rd grade predicted greater increases in mathematical reasoning from 3rd grade to 4th grade among lower achieving partners. These effects held after controlling for peer acceptance and rejection, task avoidance, interest in mathematics, maternal support for homework, parental education, length of the friendship, and friendship group norms on mathematical reasoning. PMID:26402901
ERIC Educational Resources Information Center
Kaminskiene, Lina; Stasiunaitiene, Egle
2013-01-01
The article identifies the validity of assessment of non-formal and informal learning achievements (NILA) as one of the key factors for encouraging further development of the process of assessing and recognising non-formal and informal learning achievements in higher education. The authors analyse why the recognition of non-formal and informal…
ERIC Educational Resources Information Center
Arredondo, Patricia; Castillo, Linda G.
2011-01-01
Latina/o student achievement is a priority for the American Association of Hispanics in Higher Education (AAHHE). To date, AAHHE has worked deliberately on this agenda. However, well-established higher education associations such as the Association of American Universities (AAU) and the Association of Public and Land-grant Universities (APLU) are…
Relationship between Study Habits and Academic Achievement of Higher Secondary School Students
ERIC Educational Resources Information Center
Lawrence, A. S. Arul
2014-01-01
The present study was probed to find the significant relationship between study habits and academic achievement of higher secondary school students with reference to the background variables. Survey method was employed. Data for the study were collected from 300 students in 13 higher secondary schools using Study Habits Inventory by V.G. Anantha…
A general higher-order remap algorithm for ALE calculations
Chiravalle, Vincent P
2011-01-05
A numerical technique for solving the equations of fluid dynamics with arbitrary mesh motion is presented. The three phases of the Arbitrary Lagrangian Eulerian (ALE) methodology are outlined: the Lagrangian phase, grid relaxation phase and remap phase. The Lagrangian phase follows a well known approach from the HEMP code; in addition the strain rate and flow divergence are calculated in a consistent manner according to Margolin. A donor cell method from the SALE code forms the basis of the remap step, but unlike SALE a higher order correction based on monotone gradients is also added to the remap. Four test problems were explored to evaluate the fidelity of these numerical techniques, as implemented in a simple test code, written in the C programming language, called Cercion. Novel cell-centered data structures are used in Cercion to reduce the complexity of the programming and maximize the efficiency of memory usage. The locations of the shock and contact discontinuity in the Riemann shock tube problem are well captured. Cercion demonstrates a high degree of symmetry when calculating the Sedov blast wave solution, with a peak density at the shock front that is similar to the value determined by the RAGE code. For a flyer plate test problem both Cercion and FLAG give virtually the same velocity temporal profile at the target-vacuum interface. When calculating a cylindrical implosion of a steel shell, Cercion and FLAG agree well and the Cercion results are insensitive to the use of ALE.
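The remap ingredient can be illustrated in one dimension: a donor-cell flux plus a monotone (minmod-limited) linear correction, applied when every cell of a uniform periodic grid shifts by a fraction c of a cell width. This toy is a generic SALE-style remap with a monotone higher-order term, not the Cercion implementation.

```python
def minmod(a, b):
    """Monotone slope limiter: zero at extrema, smaller slope otherwise."""
    if a * b <= 0.0:
        return 0.0
    return min(abs(a), abs(b)) * (1.0 if a > 0 else -1.0)

def remap_1d(u, c):
    """One conservative remap sweep on a uniform periodic 1-D grid whose
    cells all shift by a fraction 0 <= c < 1 of a cell width.  Donor-cell
    flux plus a minmod-limited linear correction (illustrative sketch)."""
    n = len(u)
    flux = [0.0] * n        # flux[i] crosses the face between i and i+1
    for i in range(n):
        s = minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i])
        flux[i] = c * (u[i] + 0.5 * (1.0 - c) * s)
    # conservative update: what leaves cell i enters cell i+1
    return [u[i] - flux[i] + flux[i - 1] for i in range(n)]
```

Because every face flux is added to one cell and subtracted from its neighbor, the total is conserved exactly, and the limiter keeps the remap from creating new extrema at discontinuities.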
ERIC Educational Resources Information Center
Schmid, Richard F.; Bernard, Robert M.; Borokhovski, Eugene; Tamim, Rana; Abrami, Philip C.; Wade, C. Anne; Surkes, Michael A.; Lowerison, Gretchen
2009-01-01
This paper reports the findings of a Stage I meta-analysis exploring the achievement effects of computer-based technology use in higher education classrooms (non-distance education). An extensive literature search revealed more than 6,000 potentially relevant primary empirical studies. Analysis of a representative sample of 231 studies (k = 310)…
Leveraging Quality Improvement to Achieve Student Learning Assessment Success in Higher Education
ERIC Educational Resources Information Center
Glenn, Nancy Gentry
2009-01-01
Mounting pressure for transformational change in higher education driven by technology, globalization, competition, funding shortages, and increased emphasis on accountability necessitates that universities implement reforms to demonstrate responsiveness to all stakeholders and to provide evidence of student achievement. In the face of the demand…
An Exploratory Study of the Achievement of the Twenty-First Century Skills in Higher Education
ERIC Educational Resources Information Center
Ghaith, Ghazi
2010-01-01
Purpose: The purpose of this paper is to present the results of a survey study of the achievement of twenty-first century skills in higher education. Design/methodology/approach: The study employs a quantitative survey design. Findings: The findings indicate that the basic scientific and technological skills of reading critically and writing…
Achieving Higher Levels of Success for A.D.H.D. Students Working in Collaborative Groups
ERIC Educational Resources Information Center
Simplicio, Joseph S. C.
2007-01-01
This article explores a new and innovative strategy for helping students with Attention Deficit Hyperactivity Disorder (A.D.H.D.) achieve higher levels of academic success when working in collaborative groups. Since the research indicates that students with this disorder often have difficulty in maintaining their concentration this strategy is…
ERIC Educational Resources Information Center
Magen-Nagar, Noga
2016-01-01
The purpose of the current study is to explore the effects of learning strategies on Mathematical Literacy (ML) of students in higher and lower achieving countries. To address this issue, the study utilizes PISA2002 data to conduct a multi-level analysis (HLM) of Hong Kong and Israel students. In PISA2002, Israel was rated 31st in Mathematics,…
An Analysis of Factors Influencing the Achievement of Higher Education by Chief Fire Officers
ERIC Educational Resources Information Center
Ditch, Robert L.
2012-01-01
The leadership of the United States Fire Service (FS) believes that higher education increases the professionalism of FS members. The research problem at the research site, which is a multisite fire department located in southeastern United States, was the lack of research-based findings on the factors influencing the achievement of higher…
Fast algorithm for scaling analysis with higher-order detrending moving average method
NASA Astrophysics Data System (ADS)
Tsujimoto, Yutaka; Miki, Yuki; Shimatani, Satoshi; Kiyono, Ken
2016-05-01
Among scaling analysis methods based on the root-mean-square deviation from the estimated trend, it has been demonstrated that centered detrending moving average (DMA) analysis with a simple moving average has good performance when characterizing long-range correlation or fractal scaling behavior. Furthermore, higher-order DMA has also been proposed; it is shown to have better detrending capabilities, removing higher-order polynomial trends than original DMA. However, a straightforward implementation of higher-order DMA requires a very high computational cost, which would prevent practical use of this method. To solve this issue, in this study, we introduce a fast algorithm for higher-order DMA, which consists of two techniques: (1) parallel translation of moving averaging windows by a fixed interval; (2) recurrence formulas for the calculation of summations. Our algorithm can significantly reduce computational cost. Monte Carlo experiments show that the computational time of our algorithm is approximately proportional to the data length, although that of the conventional algorithm is proportional to the square of the data length. The efficiency of our algorithm is also shown by a systematic study of the performance of higher-order DMA, such as the range of detectable scaling exponents and detrending capability for removing polynomial trends. In addition, through the analysis of heart-rate variability time series, we discuss possible applications of higher-order DMA.
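The cumulative-sum trick behind technique (2) can be shown for the simplest case, zeroth-order (simple-moving-average) centered DMA: precomputing prefix sums of the integrated profile makes each window average an O(1) subtraction, so one scale costs O(N) instead of O(N·s). The paper's fast algorithm generalizes this recurrence idea to higher-order polynomial moving averages; this sketch covers only the order-0 case.

```python
def dma_fluctuation(x, s):
    """Centered zeroth-order DMA fluctuation F(s) for an odd window s,
    computed with a prefix-sum recurrence (illustrative sketch of the
    fast-summation idea; higher-order DMA needs more such recurrences)."""
    n = len(x)
    y = [0.0] * n                  # integrated profile of the signal
    t = 0.0
    for i, v in enumerate(x):
        t += v
        y[i] = t
    c = [0.0] * (n + 1)            # prefix sums of the profile
    for i in range(n):
        c[i + 1] = c[i] + y[i]
    h = s // 2
    sq, cnt = 0.0, 0
    for i in range(h, n - h):
        ma = (c[i + h + 1] - c[i - h]) / s   # O(1) moving average
        d = y[i] - ma
        sq += d * d
        cnt += 1
    return (sq / cnt) ** 0.5
```

A quick sanity check: a constant signal integrates to a linear profile, which a centered moving average reproduces exactly, so F(s) vanishes; any fluctuating signal gives F(s) > 0.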
Higher-Order, Space-Time Adaptive Finite Volume Methods: Algorithms, Analysis and Applications
Minion, Michael
2014-04-29
The four main goals outlined in the proposal for this project were: 1. Investigate the use of higher-order (in space and time) finite-volume methods for fluid flow problems. 2. Explore the embedding of iterative temporal methods within traditional block-structured AMR algorithms. 3. Develop parallel in time methods for ODEs and PDEs. 4. Work collaboratively with the Center for Computational Sciences and Engineering (CCSE) at Lawrence Berkeley National Lab towards incorporating new algorithms within existing DOE application codes.
ERIC Educational Resources Information Center
Jacobs, Nicky; Harvey, David
2005-01-01
Differences in family factors in determining academic achievement were investigated by testing 432 parents in nine independent, coeducational Melbourne schools. Schools were ranked and categorized into three groups (high, medium and low), based on student achievement (ENTER) scores in their final year of secondary school and school improvement…
Liu, Yinxiao; Jin, Dakai; Saha, Punam K.
2015-01-01
Adult bone diseases, especially osteoporosis, lead to increased risk of fracture associated with substantial morbidity, mortality, and financial costs. Clinically, osteoporosis is defined by low bone mineral density (BMD); however, increasing evidence suggests that the micro-architectural quality of trabecular bone (TB) is an important determinant of bone strength and fracture risk. Accurate measurement of trabecular thickness and marrow spacing is of significant interest for early diagnosis of osteoporosis or treatment effects. Here, we present a new robust algorithm for computing TB thickness and marrow spacing at a low resolution achievable in vivo. The method uses a star-line tracing technique that effectively deals with partial voluming effects of in vivo imaging where voxel size is comparable to TB thickness. Experimental results on cadaveric ankle specimens have demonstrated the algorithm’s robustness (ICC>0.98) under repeat scans of multi-row detector computed tomography (MD-CT) imaging. It has been observed in experimental results that TB thickness and marrow spacing measures as computed by the new algorithm have strong association (R2 ∈{0.85, 0.87}) with TB’s experimental mechanical strength measures. PMID:27330678
Leveraging People-Related Maturity Issues for Achieving Higher Maturity and Capability Levels
NASA Astrophysics Data System (ADS)
Buglione, Luigi
During the past 20 years, Maturity Models (MM) have become a buzzword in the ICT world. Since Crosby's initial idea in 1979, many models have been created in the Software & Systems Engineering domains, addressing various perspectives. Analyzing the content of the Process Reference Models (PRM) in many of them shows that people-related issues carry little weight in appraisals of organizational capabilities, while in practice they are considered significant contributors in traditional process and organizational performance appraisals, as stressed in well-known Performance Management models such as MBQA, EFQM and BSC. This paper proposes ways to leverage people-related maturity issues by merging HR practices from several types of maturity models into the organizational Business Process Model (BPM) in order to achieve higher organizational maturity and capability levels.
Han, Qi-Gang; Yang, Wen-Ke; Zhu, Pin-Wen; Ban, Qing-Chu; Yan, Ni; Zhang, Qiang
2013-07-01
In order to increase the maximum cell pressure of the cubic high pressure apparatus, we have developed a new structure of tungsten carbide cubic anvil (tapered cubic anvil), based on the principle of massive support and lateral support. Our results indicated that the tapered cubic anvil has several advantages. First, the tapered cubic anvil can raise the pressure transfer rate to above 36.37%, compared to the conventional anvil. Second, the failure-crack rate decreases by about 11.20% after the modification of the conventional anvil. Third, the limit of static high pressure in the sample cell can be extended to 13 GPa, increasing the maximum cell pressure by about 73.3% over the conventional anvil. Fourth, the volume of the sample cell compressed by tapered cubic anvils can reach 14.13 mm(3) (3 mm diameter × 2 mm long), which is three and six orders of magnitude larger than that of the double-stage apparatus and the diamond anvil cell, respectively. This work represents a relatively simple method for achieving higher pressures and larger sample cells. PMID:23902079
Pyramiding B genes in cotton achieves broader but not always higher resistance to bacterial blight.
Essenberg, Margaret; Bayles, Melanie B; Pierce, Margaret L; Verhalen, Laval M
2014-10-01
Near-isogenic lines of upland cotton (Gossypium hirsutum) carrying single, race-specific genes B4, BIn, and b7 for resistance to bacterial blight were used to develop a pyramid of lines with all possible combinations of two and three genes to learn whether the pyramid could achieve broad and high resistance approaching that of L. A. Brinkerhoff's exceptional line Im216. Isogenic strains of Xanthomonas axonopodis pv. malvacearum carrying single avirulence (avr) genes were used to identify plants carrying specific resistance (B) genes. Under field conditions in north-central Oklahoma, pyramid lines exhibited broader resistance to individual races and, consequently, higher resistance to a race mixture. It was predicted that lines carrying two or three B genes would also exhibit higher resistance to race 1, which possesses many avr genes. Although some enhancements were observed, they did not approach the level of resistance of Im216. In a growth chamber, bacterial populations attained by race 1 in and on leaves of the pyramid lines decreased significantly with increasing number of B genes in only one of four experiments. The older lines, Im216 and AcHR, exhibited considerably lower bacterial populations than any of the one-, two-, or three-B-gene lines. A spreading collapse of spray-inoculated AcBIn and AcBInb7 leaves appears to be a defense response (conditioned by BIn) that is out of control. PMID:24655289
Effects of Traditional, Blended and E-Learning on Students' Achievement in Higher Education
ERIC Educational Resources Information Center
Al-Qahtani, Awadh A. Y.; Higgins, S. E.
2013-01-01
The study investigates the effect of e-learning, blended learning and classroom learning on students' achievement. Two experimental groups together with a control group from Umm Al-Qura University in Saudi Arabia were identified randomly. To assess students' achievement in the different groups, pre- and post-achievement tests were used. The…
Harmon, Tyler S; Crabtree, Michael D; Shammas, Sarah L; Posey, Ammon E; Clarke, Jane; Pappu, Rohit V
2016-09-01
Many intrinsically disordered proteins (IDPs) participate in coupled folding and binding reactions and form alpha helical structures in their bound complexes. Alanine, glycine, or proline scanning mutagenesis approaches are often used to dissect the contributions of intrinsic helicities to coupled folding and binding. These experiments can yield confounding results because the mutagenesis strategy changes the amino acid compositions of IDPs. Therefore, an important next step in mutagenesis-based approaches to mechanistic studies of coupled folding and binding is the design of sequences that satisfy three major constraints. These are (i) achieving a target intrinsic alpha helicity profile; (ii) fixing the positions of residues corresponding to the binding interface; and (iii) maintaining the native amino acid composition. Here, we report the development of a Genetic Algorithm for Design of Intrinsic secondary Structure (GADIS) for designing sequences that satisfy the specified constraints. We describe the algorithm and present results to demonstrate the applicability of GADIS by designing sequence variants of the intrinsically disordered PUMA system that undergoes coupled folding and binding to Mcl-1. Our sequence designs span a range of intrinsic helicity profiles. The predicted variations in sequence-encoded mean helicities are tested against experimental measurements. PMID:27503953
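A minimal sketch of a GADIS-style search is below. The scoring is hypothetical (a toy per-residue helix-propensity table stands in for the simulation-based helicity predictor the real GADIS uses), but it illustrates how swap-only mutations satisfy the three constraints: composition never changes, interface positions stay fixed, and fitness targets a helicity profile.

```python
import random

# Toy helix propensities; a stand-in for a real helicity predictor.
PROPENSITY = {"A": 1.4, "L": 1.2, "E": 1.1, "K": 1.1, "G": 0.5, "P": 0.3,
              "S": 0.7, "Q": 1.0, "R": 1.0, "D": 0.9, "I": 1.0, "V": 0.9}

def helicity_profile(seq, window=5):
    """Toy per-residue helicity: mean propensity over a sliding window."""
    half = window // 2
    return [sum(PROPENSITY.get(a, 1.0) for a in seq[max(0, i - half): i + half + 1])
            / len(seq[max(0, i - half): i + half + 1]) for i in range(len(seq))]

def score(seq, target):
    # Negative squared deviation from the target helicity profile.
    return -sum((h - t) ** 2 for h, t in zip(helicity_profile(seq), target))

def mutate(seq, fixed):
    """Swap two non-interface positions: composition is never changed."""
    free = [i for i in range(len(seq)) if i not in fixed]
    i, j = random.sample(free, 2)
    s = list(seq)
    s[i], s[j] = s[j], s[i]
    return "".join(s)

def gadis_like(seed, target, fixed=frozenset(), pop=30, gens=200):
    population = [seed] + [mutate(seed, fixed) for _ in range(pop - 1)]
    for _ in range(gens):
        population.sort(key=lambda s: score(s, target), reverse=True)
        survivors = population[: pop // 2]      # elitist selection
        population = survivors + [mutate(random.choice(survivors), fixed)
                                  for _ in range(pop - len(survivors))]
    return max(population, key=lambda s: score(s, target))

random.seed(1)
seed = "GAPSAQEELKAKGQSLPG"                  # hypothetical IDP-like sequence
target = [1.2] * len(seed)                   # ask for a uniformly helical profile
best = gadis_like(seed, target, fixed={3, 7})
```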
ERIC Educational Resources Information Center
Rouse, Martyn; Florian, Lani
2006-01-01
This paper reports on a multi-method study that examined the effects of including higher and lower proportions of students designated as having special educational needs on student achievement in secondary schools. It explores some of the issues involved in conducting such research and considers the extent to which newly available national data in…
ERIC Educational Resources Information Center
Borman, Geoffrey D.; Kimball, Steven M.
2005-01-01
Using standards-based evaluation ratings for nearly 400 teachers, and achievement results for over 7,000 students from grades 4-6, this study investigated the distribution and achievement effects of teacher quality in Washoe County, a mid-sized school district serving Reno and Sparks, Nevada. Classrooms with higher concentrations of minority,…
Comparison of Five System Identification Algorithms for Rotorcraft Higher Harmonic Control
NASA Technical Reports Server (NTRS)
Jacklin, Stephen A.
1998-01-01
This report presents an analysis and performance comparison of five system identification algorithms. The methods are presented in the context of identifying a frequency-domain transfer matrix for the higher harmonic control (HHC) of helicopter vibration. The five system identification algorithms include three previously proposed methods: (1) the weighted-least-squares-error approach (in moving-block format), (2) the Kalman filter method, and (3) the least-mean-squares (LMS) filter method. In addition, there are two new ones: (4) a generalized Kalman filter method and (5) a generalized LMS filter method. The generalized Kalman filter method and the generalized LMS filter method were derived as extensions of the classic methods to permit identification by using more than one measurement per identification cycle. Simulation results are presented for conditions ranging from the ideal case of a stationary transfer matrix and no measurement noise to the more complex cases involving both measurement noise and transfer-matrix variation. Both open-loop identification and closed-loop identification were simulated. Closed-loop mode identification was more challenging than open-loop identification because of the decreasing signal-to-noise ratio as the vibration became reduced. The closed-loop simulation considered both local-model identification, with measured vibration feedback and global-model identification with feedback of the identified uncontrolled vibration. The algorithms were evaluated in terms of their accuracy, stability, convergence properties, computation speeds, and relative ease of implementation.
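As an illustration of the simplest of the five methods, here is a hedged LMS-style sketch (a generic formulation, not the report's exact HHC equations): the transfer matrix T in z = T u is estimated by stochastic-gradient updates from input/output measurement pairs.

```python
import random

def mat_vec(T, u):
    # Multiply matrix T by vector u.
    return [sum(T[i][j] * u[j] for j in range(len(u))) for i in range(len(T))]

def lms_identify(pairs, n_out, n_in, mu=0.05, sweeps=200):
    """LMS identification: T <- T + mu * (z - T u) u^T for each (u, z) pair."""
    T = [[0.0] * n_in for _ in range(n_out)]
    for _ in range(sweeps):
        for u, z in pairs:
            err = [z[i] - zi for i, zi in enumerate(mat_vec(T, u))]
            for i in range(n_out):
                for j in range(n_in):
                    T[i][j] += mu * err[i] * u[j]
    return T

random.seed(0)
T_true = [[1.0, -0.5], [0.3, 2.0]]   # hypothetical "plant" transfer matrix
pairs = []
for _ in range(50):
    u = [random.uniform(-1, 1), random.uniform(-1, 1)]
    z = [zi + random.gauss(0, 0.01) for zi in mat_vec(T_true, u)]  # noisy output
    pairs.append((u, z))
T_hat = lms_identify(pairs, 2, 2)
```

With small steps and repeated sweeps the estimate converges to the least-squares fit of the measured pairs, which is why the report evaluates such filters on convergence speed and noise robustness.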
What Is the Best Way to Achieve Broader Reach of Improved Practices in Higher Education?
ERIC Educational Resources Information Center
Kezar, Adrianna
2011-01-01
This article examines a common problem in higher education--how to create more widespread use of improved practices, often commonly referred to as innovations. I argue that policy models of scale-up are often advocated in higher education but that they have a dubious history in community development and K-12 education and that higher education…
ERIC Educational Resources Information Center
Catalano, D. Chase J.
2015-01-01
Trans* men have not, as yet, received specific research attention in higher education. Based on intensive interviews with 25 trans* men enrolled in colleges or universities in New England, I explore their experiences in higher education. I analyze participants' descriptions of supports and challenges in their collegiate environments, as well as…
Using the Internet To Deliver Higher Education: A Cautionary Tale about Achieving Good Practice.
ERIC Educational Resources Information Center
Coombs, Steven J.; Rodd, Jillian
2001-01-01
Reviews the development and delivery of a higher education course module that was designed to provide remote learners in England with computer-supported solutions to access higher education as part of a technology-assisted distance education program. Highlights include use of a Web site; e-mail; videoconferencing; and student attrition rate.…
Higher Education and the Achievement (and/or Prevention) of Equity and Social Justice
ERIC Educational Resources Information Center
Brennan, John; Naidoo, Rajani
2008-01-01
The article examines the theoretical and empirical literature on higher education's role in relation to social equity and related notions of citizenship, social justice, social cohesion and meritocracy. It considers both the education and the research functions of higher education and how these impact upon different sections of society, on who…
ERIC Educational Resources Information Center
Murphy, David; Williams, Jeff
1997-01-01
Describes four successful cost-containment initiatives of the Midwestern Higher Education Commission, which was established to advance higher education in the Midwest through interstate cooperation. Projects include development of Academic Scheduling and Management Software; Internet-based activities; the Virtual Private Network, to reduce…
Colonialism on Campus: A Critique of Mentoring to Achieve Equity in Higher Education.
ERIC Educational Resources Information Center
Collins, Roger L.
In order to reconceptualize the mentoring relationship in higher education, parallels to colonialist strategies of subordination are drawn. The objective is to stimulate renewed thinking and action more consistent with stated policy goals in higher education. One of the primary functions of a mentor or sponsor is to exercise personal power to…
The Effects of Higher Education/Military Service on Achievement Levels of Police Academy Cadets.
ERIC Educational Resources Information Center
Johnson, Thomas Allen
This study compared levels of achievement of three groups of Houston (Texas) police academy cadets: those with no military service but with 60 or more college credit hours, those with military service and 0 hours of college credit, and those with military service and 1 to 59 hours of college credit. Prior to 1991, police cadets in Houston were…
ERIC Educational Resources Information Center
Dupont, Serge; Meert, Gaëlle; Galand, Benoît; Nils, Frédéric
2013-01-01
Research on academic achievement at a university has mainly focused on success and persistence among first year students. Very few studies have looked at delay or failure in the completion of a final dissertation. However, this phenomenon could affect a substantial proportion of students and has considerable costs. The purpose of the present study…
Gender Segregation in Higher Education: Effects of Aspirations, Mathematics Achievement, and Income.
ERIC Educational Resources Information Center
Wilson, Kenneth L.; Boldizar, Janet P.
1990-01-01
Analyzes the relationships among mathematics achievement levels, income potential, high school aspirations, and the gender segregation of bachelor's degrees. Investigates how gender segregation changed between 1973 and 1983. Concludes that gender segregation is present at the high school and bachelor's levels. Maintains that psychological barriers…
ERIC Educational Resources Information Center
Mc Beth, Maureen
2010-01-01
This study provides important insights into the relationship between the epistemological beliefs of community college students, the selection of learning strategies, and academic achievement. This study employed a quantitative survey design. Data were collected by surveying students at a community college during the spring semester of 2010. The…
Success in Higher Education: The Challenge to Achieve Academic Standing and Social Position
ERIC Educational Resources Information Center
Life, James
2015-01-01
When students look at their classmates in the classroom, consciously or unconsciously, they see competitors both for academic recognition and social success. How do they fit in relation to others and how do they succeed in achieving both? Traditional views on the drive to succeed and the fear of failure are well known as motivators for achieving…
ERIC Educational Resources Information Center
Parisi, Joe
2012-01-01
This paper explores several research questions that identify differences between conditionally admitted students and regularly admitted students in terms of achievement results at one institution. The research provides specific variables as well as relationships including historical and comparative aggregate data from 2009 and 2010 that indicate…
The Little District that Could: Literacy Reform Leads to Higher Achievement in California District
ERIC Educational Resources Information Center
Kelly, Patricia R.; Budicin-Senters, Antoinette; King, L. McLean
2005-01-01
This article describes educational reform developed over a 10-year period in California's Lemon Grove School District, which resulted in a steady and remarkable upward shift in achievement for the students of this multicultural district just outside San Diego. Six elements of literacy reform emerged as the most significant factors affecting…
ERIC Educational Resources Information Center
Usun, Salih
2004-01-01
The main aim of this study was to determine the opinions of the undergraduate students and faculty members on factors that affect student learning and academic achievement. The sub aims of this study were to: (1) Develop a mean rank ordering of the 23 dimensions affecting learning, for both the students and faculty, and determine the similarities…
ERIC Educational Resources Information Center
Eshetu, Amogne Asfaw
2015-01-01
Gender is among the determinant factors affecting students' academic achievement. This paper tried to investigate the impact of gender on academic performance of preparatory secondary school students based on 2014 EHEECE result. Ex post facto research design was used. To that end, data were collected from 3243 students from eight purposively…
ERIC Educational Resources Information Center
Myers, Carrie B.; Brown, Doreen E.; Pavel, D. Michael
2010-01-01
The purpose of this study was to assess how a comprehensive precollege intervention and developmental program among low-income high school students contributed to college enrollment outcomes measured in 2006. Our focus was on the Fifth Cohort of the Washington State Achievers (WSA) Program, which provides financial, academic, and college…
WISC-III and CAS: Which Correlates Higher with Achievement for a Clinical Sample?
ERIC Educational Resources Information Center
Naglieri, Jack A.; De Lauder, Brianna Y.; Goldstein, Sam; Schwebech, Adam
2006-01-01
The relationships between Wechsler Intelligence Scale for Children-Third Edition (WISC-III) and the Cognitive Assessment System (CAS) with the Woodcock-Johnson Tests of Achievement (WJ-III) were examined for a sample of 119 children (87 males and 32 females) ages 6 to 16. The sample was comprised of children who were referred to a specialty clinic…
Dodge, Cristina T; Tamm, Eric P; Cody, Dianna D; Liu, Xinming; Jensen, Corey T; Wei, Wei; Kundra, Vikas; Rong, John
2016-01-01
The purpose of this study was to characterize image quality and dose performance with GE CT iterative reconstruction techniques, adaptive statistical iterative reconstruction (ASiR), and model-based iterative reconstruction (MBIR), over a range of typical to low-dose intervals using the Catphan 600 and the anthropomorphic Kyoto Kagaku abdomen phantoms. The scope of the project was to quantitatively describe the advantages and limitations of these approaches. The Catphan 600 phantom, supplemented with a fat-equivalent oval ring, was scanned using a GE Discovery HD750 scanner at 120 kVp, 0.8 s rotation time, and pitch factors of 0.516, 0.984, and 1.375. The mA was selected for each pitch factor to achieve CTDIvol values of 24, 18, 12, 6, 3, 2, and 1 mGy. Images were reconstructed at 2.5 mm thickness with filtered back-projection (FBP); 20%, 40%, and 70% ASiR; and MBIR. The potential for dose reduction and low-contrast detectability were evaluated from noise and contrast-to-noise ratio (CNR) measurements in the CTP 404 module of the Catphan. Hounsfield units (HUs) of several materials were evaluated from the cylinder inserts in the CTP 404 module, and the modulation transfer function (MTF) was calculated from the air insert. The results were confirmed in the anthropomorphic Kyoto Kagaku abdomen phantom at 6, 3, 2, and 1 mGy. MBIR reduced noise levels five-fold and increased CNR by a factor of five compared to FBP below 6 mGy CTDIvol, resulting in a substantial improvement in image quality. Compared to ASiR and FBP, HU in images reconstructed with MBIR were consistently lower, and this discrepancy was reversed by higher pitch factors in some materials. MBIR improved the conspicuity of the high-contrast spatial resolution bar pattern, and MTF quantification confirmed the superior spatial resolution performance of MBIR versus FBP and ASiR at higher dose levels. While ASiR and FBP were relatively insensitive to changes in dose and pitch, the spatial resolution for MBIR…
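The CNR metric used above for low-contrast detectability can be sketched as follows; this is one common definition, since the study's exact ROI protocol is not given in the abstract, and the HU samples here are made up for illustration.

```python
import statistics

def cnr(roi_hu, bkg_hu):
    """Contrast-to-noise ratio: |mean ROI - mean background| / background SD."""
    return (abs(statistics.mean(roi_hu) - statistics.mean(bkg_hu))
            / statistics.stdev(bkg_hu))

# Illustrative (made-up) HU samples from a low-contrast insert and background:
roi = [8, 11, 10, 9, 12, 10]
bkg = [0, 2, -2, 1, -1, 0]
print(round(cnr(roi, bkg), 2))
```

Because the noise term sits in the denominator, the five-fold noise reduction reported for MBIR translates directly into the five-fold CNR gain.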
ERIC Educational Resources Information Center
Marschke, Robyn; Laursen, Sandra; Nielsen, Joyce McCarl; Rankin, Patricia
2007-01-01
Progress toward equitable gender representation among faculty in higher education has been "glacial" since the early 1970s (Glazer-Raymo, 1999; Lomperis, 1990; Trower & Chait, 2002). Women, who now make up a majority of undergraduate degree earners and approximately 46% of Ph.D. earners nationwide (National Center for Education Statistics [NCES],…
ERIC Educational Resources Information Center
Association of Universities and Colleges of Canada, 2004
2004-01-01
As Canada's opportunities to claim international leadership are assessed, the best prospects lie in a combination of our impressive higher education and research commitments, civic and institutional values, and quality of life. This paper concludes that as an exporting country, the benefits will come in economic growth. As citizens of the world,…
Linking Emotional Intelligence to Achieve Technology Enhanced Learning in Higher Education
ERIC Educational Resources Information Center
Kruger, Janette; Blignaut, A. Seugnet
2013-01-01
Higher education institutions (HEIs) increasingly use technology-enhanced learning (TEL) environments (e.g. blended learning and e-learning) to improve student throughput and retention rates. As the demand for TEL courses increases, expectations rise for faculty to meet the challenge of using TEL effectively. The promises that TEL holds have not…
ERIC Educational Resources Information Center
Ho, Hsuan-Fu; Lin, Ming-Huang; Yang, Cheng-Cheng
2015-01-01
International knowledge and skills are essential for success in today's highly competitive global marketplace. As one of the key providers of such knowledge and skills, universities have become a key focus of the internationalization strategies of governments throughout the world. While the internationalization of higher education clearly has…
Achieving Higher Accuracy in the Gamma-Ray Spectrocopic Assay of Holdup
Russo, P.A.; Wenz, T.R.; Smith, S.E.; Harris, J.F.
2000-09-01
compelling to use these procedures. The algorithms and the procedures are simple, general, and easily automated for use plant-wide. This paper shows the derivation of the new, generalized correction algorithms for finite-source and self-attenuation effects. It also presents an analysis of the sensitivity of the holdup result to the uncertainty in the empirical parameter when one or both corrections are made. The paper uses specific examples of the magnitudes of finite-source and self-attenuation corrections to measurements that were made in the field. It discusses the automated implementation of the correction procedure.
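As an illustration of the kind of correction involved, the standard far-field slab self-attenuation correction factor used in gamma-ray holdup assay can be sketched as below. This is a generic textbook form, not the paper's generalized algorithm; x is the areal-density attenuation (mass attenuation coefficient times areal density).

```python
import math

def self_attenuation_cf(x):
    """Far-field slab self-attenuation correction: CF = x / (1 - exp(-x)).

    x = mu * rho * t (attenuation through the deposit); the corrected
    holdup mass is the measured mass multiplied by CF.
    """
    if x == 0.0:
        return 1.0
    return x / (1.0 - math.exp(-x))

# Sensitivity of the result to the attenuation parameter, as the paper
# discusses: the correction grows quickly for thick deposits.
for x in (0.1, 0.5, 1.0, 2.0):
    print(x, round(self_attenuation_cf(x), 3))
```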
ERIC Educational Resources Information Center
Klapproth, Florian
2015-01-01
Two objectives guided this research. First, this study examined how well teachers' tracking decisions contribute to the homogenization of their students' achievements. Second, the study explored whether teachers' tracking decisions would be outperformed in homogenizing the students' achievements by statistical models of tracking decisions. These…
Moving to higher ground: Closing the high school science achievement gap
NASA Astrophysics Data System (ADS)
Mebane, Joyce Graham
The purpose of this study was to examine the perceptions of West High School constituents (students, parents, teachers, administrators, and guidance counselors) about the readiness and interest of African American students at West High School to take Advanced Placement (AP) and International Baccalaureate (IB) science courses as a strategy for closing the achievement gap. This case study utilized individual interviews and questionnaires for data collection. The participants were selected biology students and their parents, teachers, administrators, and guidance counselors at West High School. The results of the study indicated that just over half the students and teachers, most parents, and all guidance counselors thought African American students were prepared to take AP science courses. Only one of the three administrators thought the students were prepared to take AP science courses. Between one-half and two-thirds of the students, parents, teachers, and administrators thought students were interested in taking an AP science course. Only two of the guidance counselors thought there was interest among the African American students in taking AP science courses. The general consensus among the constituents about the readiness and interest of African American students at West High School to take IB science courses was that it is too early in the process to really make definitive statements. West is a prospective IB school and the program is new and not yet in place. Educators at the West High School community must find reasons to expect each student to succeed. Lower expectations often translate into lower academic demands and less rigor in courses. Lower academic demands and less rigor in courses translate into less than adequate performance by students. When teachers and administrators maintain high expectations, they encourage students to aim high rather than slide by with mediocre effort (Lumsden, 1997). As a result of the study, the following suggestions should
ERIC Educational Resources Information Center
Alstete, Jeffrey W.
2004-01-01
This book focuses on contemporary accreditation, why it matters, and how it can be done effectively. The author covers historical background, getting started, strategies for achieving accreditation, and visions for future academic success, with examples and case studies. Accreditation is the primary way of ensuring the quality of higher education…
ERIC Educational Resources Information Center
Gulacar, Ozcan; Eilks, Ingo; Bowman, Charles R.
2014-01-01
This paper reports a comparison of a group of higher- and lower-achieving undergraduate chemistry students, 17 in total, separated by their ability in stoichiometry. This exploratory study investigated parallels and differences in the students' general and domain-specific cognitive abilities. Performance, strategies, and…
ERIC Educational Resources Information Center
Keeley, Thomas Allen
2010-01-01
The purpose of this study was to determine whether the areas of teaching methods, teacher-student relationships, school structure, school-community partnerships or school leadership were significantly embedded in practice and acted as a change agent among school systems that achieve higher than expected results on their state standardized testing…
ERIC Educational Resources Information Center
Sarwar, Muhammad; Ashrafi, Ghulam Muhammad
2014-01-01
The purpose of this study was to analyze Students' Commitment, Engagement and Locus of Control as predictors of Academic Achievement at Higher Education Level. We used analytical model and conclusive research approach to conduct study and survey method for data collection. We selected 369 students using multistage sampling technique from…
ERIC Educational Resources Information Center
Schlechter, Melissa; Milevsky, Avidan
2010-01-01
The purpose of the current study is to determine the interconnection between parental level of education, psychological well-being, academic achievement and reasons for pursuing higher education in adolescents. Participants included 439 college freshmen from a mid-size state university in the northeastern USA. A survey, including indices of…
Achieving Higher Diagnostic Results in Stereotactic Brain Biopsy by Simple and Novel Technique
Gulsen, Salih
2015-01-01
BACKGROUND: Neurosurgeons prefer stereotactic biopsy for pathologic diagnosis when an intracranial lesion is located in eloquent areas or deep sites of the brain. AIM: To achieve a higher rate of definite pathologic diagnosis during stereotactic biopsy and to develop a practical method. MATERIAL AND METHODS: We determined at least two different target points and two different trajectories for taking brain biopsies, in contrast to the conventional stereotactic method, in which a single point is selected. We separated our patients into two groups: group 1 (N=10) and group 2 (N=19). We chose one target in group 1, and two different targets with two different trajectories in group 2. In group 2, one patient underwent craniotomy due to hemorrhage at the biopsy site during tissue biting. However, no patient in either group suffered any neurological complication related to the biopsy procedure. RESULTS: In group 1, two of 10 cases, and in group 2, fourteen of 19 cases, had positive biopsy harvesting. The difference between group 1 and group 2 was statistically significant (P<0.05). CONCLUSIONS: Regarding these results, choosing more than one trajectory and taking at least six specimens from each target provides a higher diagnostic rate in stereotactic biopsy.
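The reported significance can be checked against the stated counts (2/10 vs. 14/19 positive biopsies) with a two-sided Fisher exact test built from the hypergeometric distribution; the paper's actual statistical procedure is not stated in the abstract, so this is only a plausibility check.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """2x2 table [[a, b], [c, d]]: sum probabilities of all tables with the
    same margins that are as or less probable than the observed one."""
    n, r1, c1 = a + b + c + d, a + b, a + c
    def p_table(x):  # hypergeometric probability of x in the top-left cell
        return comb(r1, x) * comb(n - r1, c1 - x) / comb(n, c1)
    p_obs = p_table(a)
    lo, hi = max(0, c1 - (n - r1)), min(r1, c1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs + 1e-12)

p = fisher_exact_two_sided(2, 8, 14, 5)   # group 1: 2/10, group 2: 14/19
print(round(p, 4))
```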
Jet algorithms in electron-positron annihilation: perturbative higher order predictions
NASA Astrophysics Data System (ADS)
Weinzierl, Stefan
2011-02-01
This article gives results on several jet algorithms in electron-positron annihilation: Considered are the exclusive sequential recombination algorithms Durham, Geneva, Jade-E0 and Cambridge, which are typically used in electron-positron annihilation. In addition also inclusive jet algorithms are studied. Results are provided for the inclusive sequential recombination algorithms Durham, Aachen and anti-kt, as well as the infrared-safe cone algorithm SISCone. The results are obtained in perturbative QCD and are N3LO for the two-jet rates, NNLO for the three-jet rates, NLO for the four-jet rates and LO for the five-jet rates.
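The exclusive algorithms named above share one sequential-recombination loop and differ mainly in the distance measure; a minimal sketch of the Durham (kt) variant with simple E-scheme recombination is below (four-momenta are (E, px, py, pz); the event is made up for illustration).

```python
import math

def durham_y(p1, p2, q2):
    # Durham distance: y_ij = 2 min(E_i, E_j)^2 (1 - cos theta_ij) / Q^2
    n1 = math.sqrt(sum(c * c for c in p1[1:]))
    n2 = math.sqrt(sum(c * c for c in p2[1:]))
    cos = sum(a * b for a, b in zip(p1[1:], p2[1:])) / (n1 * n2)
    return 2.0 * min(p1[0], p2[0]) ** 2 * (1.0 - cos) / q2

def durham_cluster(momenta, ycut):
    jets = [list(p) for p in momenta]
    q2 = sum(p[0] for p in jets) ** 2          # total visible energy squared
    while len(jets) > 1:
        pairs = [(durham_y(jets[i], jets[j], q2), i, j)
                 for i in range(len(jets)) for j in range(i + 1, len(jets))]
        y, i, j = min(pairs)
        if y >= ycut:                          # all pairs resolved: stop
            break
        jets[i] = [a + b for a, b in zip(jets[i], jets[j])]  # E-scheme merge
        del jets[j]
    return jets

# Two back-to-back particles plus one soft, nearly collinear one:
evt = [(45.0, 0.0, 0.0, 45.0), (45.0, 0.0, 0.0, -45.0), (5.0, 0.5, 0.0, 4.97)]
jets = durham_cluster(evt, 0.01)
print(len(jets))   # the soft particle is absorbed, leaving two jets
```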
Beaujean, A Alexander; Parkin, Jason; Parker, Sonia
2014-09-01
Previous research using the Cattell-Horn-Carroll (CHC) theory of cognitive abilities has shown a relationship between cognitive ability and academic achievement. Most of this research, however, has been done using the Woodcock-Johnson family of instruments with a higher order factor model. For CHC theory to grow, research should be done with other assessment instruments and tested with other factor models. This study examined the relationship between different factor models of CHC theory and the factors' relationships with language-based academic achievement (i.e., reading and writing). Using the co-norming sample for the Wechsler Intelligence Scale for Children--4th Edition and the Wechsler Individual Achievement Test--2nd Edition, we found that bifactor and higher order models of the subtests of the Wechsler Intelligence Scale for Children-4th Edition produced a different set of Stratum II factors, which, in turn, have very different relationships with the language achievement variables of the Wechsler Individual Achievement Test--2nd Edition. We conclude that the factor model used to represent CHC theory makes little difference when general intelligence is of major interest, but it makes a large difference when the Stratum II factors are of primary concern, especially when they are used to predict other variables. PMID:24840178
NASA Astrophysics Data System (ADS)
Zeng, Li; Jansen, Christian; Unser, Michael A.; Hunziker, Patrick
2001-12-01
High resolution multidimensional image data yield huge datasets. For compression and analysis, 2D approaches are often used, neglecting the information coherence in higher dimensions, which can be exploited for improved compression. We designed a wavelet compression algorithm suited for data of arbitrary dimensions, and assessed its ability for compression of 4D medical images. Basically, separable wavelet transforms are done in each dimension, followed by quantization and standard coding. Results were compared with conventional 2D wavelet. We found that in 4D heart images, this algorithm allowed high compression ratios, preserving diagnostically important image features. For similar image quality, compression ratios using the 3D/4D approaches were typically much higher (2-4 times per added dimension) than with the 2D approach. For low-resolution images created with the requirement to keep predefined key diagnostic information (contractile function of the heart), compression ratios up to 2000 could be achieved. Thus, higher-dimensional wavelet compression is feasible, and by exploitation of data coherence in higher image dimensions allows much higher compression than comparable 2D approaches. The proven applicability of this approach to multidimensional medical imaging has important implications especially for the fields of image storage and transmission and, specifically, for the emerging field of telemedicine.
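The separable idea described above can be sketched with a single-level Haar transform applied along each dimension in turn; the paper's codec adds quantization and entropy coding on top of this step, and the 2D input here stands in for the higher-dimensional case.

```python
def haar_1d(v):
    """Single-level Haar transform: pairwise averages, then differences."""
    half = len(v) // 2
    avg = [(v[2 * i] + v[2 * i + 1]) / 2 for i in range(half)]
    dif = [(v[2 * i] - v[2 * i + 1]) / 2 for i in range(half)]
    return avg + dif

def haar_separable_2d(img):
    rows = [haar_1d(r) for r in img]              # transform along dimension 1
    cols = [haar_1d(list(c)) for c in zip(*rows)]  # transform along dimension 2
    return [list(r) for r in zip(*cols)]

img = [[10, 10, 2, 2],
       [10, 10, 2, 2],
       [10, 10, 2, 2],
       [10, 10, 2, 2]]
out = haar_separable_2d(img)
# Coherent data concentrates energy in a few coefficients; the many zero
# detail coefficients are what the coder exploits for compression.
nonzero = sum(1 for row in out for x in row if x != 0)
print(nonzero)
```

Extending the same loop over a third and fourth axis is what lets the codec exploit coherence across slices and time frames.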
NASA Astrophysics Data System (ADS)
Putro, Budi Laksono; Surendro, Kridanto; Herbert
2016-02-01
Data is a vital asset for a business enterprise in achieving organizational goals. Data and information affect the decision-making process in the various activities of an organization. Data problems include validity, quality, duplication, control over data, and the difficulty of data availability. Data governance is the way a company or institution manages its data assets. Data governance covers the rules, policies, procedures, roles and responsibilities, and performance indicators that direct the overall management of data assets. Many studies on data or information governance stress the importance of cultural factors in data governance. Organizational culture and leadership are closely related, and the relationship runs in both directions: culture is created by leaders, and leaders are created by culture. On this basis, this study takes up the theme "Leadership and Culture of Data Governance for the Achievement of Higher Education Goals (Case Study: Indonesia University of Education)". A culture and leadership model for data governance in Indonesian higher education was developed by comparing several models of data governance, organizational culture, and organizational leadership from previous studies, weighing the advantages and disadvantages of each model against the organization's existing business. The resulting model shows that the current organizational culture at FPMIPA, Indonesia University of Education, is a market culture, while the desired culture is a clan culture. Current organizational leadership is characterized by an Individualism Index (IDV) of 83.72% and a situational leadership style in the "selling" position.
NASA Astrophysics Data System (ADS)
Erlick, Katherine
"The stereotype of engineers is that they are not people oriented; the stereotype implies that engineers would not work well in teams---that their task emphasis is a solo venture and does not encourage social aspects of collaboration" (Miner & Beyerlein, 1999, p. 16). The problem is determining the best method of providing a motivating environment where design engineers may contribute within a team in order to achieve higher performance in the organization. Theoretically, self-directed work teams perform at higher levels. But, allowing a design engineer to contribute to the team while still maintaining his or her anonymity is the key to success. Therefore, a motivating environment must be established to encourage greater self-actualization in design engineers. The purpose of this study is to determine the favorable motivational environment for design engineers and describe the comparison between two aerospace design-engineering teams: one self-directed and the other manager directed. Following the comparison, this study identified whether self-direction or manager-direction provides the favorable motivational environment for operating as a team in pursuit of achieving higher performance. The methodology used in this research was the case study focusing on the team's levels of job satisfaction and potential for higher performance. The collection of data came from three sources, (a) surveys, (b) researcher observer journal and (c) collection of artifacts. The surveys provided information regarding personal behavior characteristics, potentiality for higher performance and motivational attributes. The researcher journal provided information regarding team dynamics, individual interaction, conflict and conflict resolution. The milestone for performance was based on the collection of artifacts from the two teams. The findings from this study illustrated that whether the team was manager-directed or self-directed does not appear to influence the needs and wants of the
General relaxation schemes in multigrid algorithms for higher order singularity methods
NASA Technical Reports Server (NTRS)
Oskam, B.; Fray, J. M. J.
1981-01-01
Relaxation schemes based on approximate and incomplete factorization techniques (AF) are described. The AF schemes allow construction of a fast multigrid method for solving integral equations of the second and first kind. The smoothing factors for integral equations of the first kind, and their comparison with similar results for equations of the second kind, are a novel item. Application of the multigrid (MG) algorithm shows convergence to the level of truncation error of a second-order accurate panel method.
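The abstract does not spell out the AF factorization details. As a generic illustration of the relaxation building block inside a multigrid cycle, a damped-Jacobi smoother is sketched below (the matrix, function name, and parameters are illustrative, not from the paper):

```python
def damped_jacobi(A, b, x, sweeps=50, omega=2/3):
    # A few damped-Jacobi relaxation sweeps: the classic smoother used
    # inside multigrid cycles to damp high-frequency error components.
    n = len(b)
    for _ in range(sweeps):
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        x = [x[i] + omega * r[i] / A[i][i] for i in range(n)]
    return x

# Toy stand-in for a discretized operator: the 1D Poisson matrix.
n = 8
A = [[2.0 if i == j else -1.0 if abs(i - j) == 1 else 0.0 for j in range(n)]
     for i in range(n)]
b = [1.0] * n
x = damped_jacobi(A, b, [0.0] * n)
```

In a full multigrid method, sweeps like these would alternate with coarse-grid corrections; the AF schemes of the paper replace the Jacobi step with factorization-based relaxation.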
ERIC Educational Resources Information Center
Stringer, Neil
2008-01-01
Advocates of using a US-style SAT for university selection claim that it is fairer to applicants from disadvantaged backgrounds than achievement tests because it assesses potential, not achievement, and that it allows finer discrimination between top applicants than GCEs. The pros and cons of aptitude tests in principle are discussed, focusing on…
ERIC Educational Resources Information Center
Siahi, Evans Atsiaya; Maiyo, Julius K.
2015-01-01
Studies on the correlates of academic achievement have paved the way for the control and manipulation of related variables for quality results in schools. In spite of the fact that schools impart uniform classroom instruction to all students, a wide range of difference is observed in their academic achievement. The study sought to determine the…
ERIC Educational Resources Information Center
Latha, Prema
2014-01-01
Disturbing sounds are often referred to as noise and, if extreme enough in degree, intensity, or frequency, as noise pollution. Achievement here refers to a change in study behavior, in relation to noise sensitivity and learning in the educational sense, reflected in changed responses to certain types of stimuli like…
ERIC Educational Resources Information Center
Wright, Bobby
This paper reviews the history of higher education for Native Americans and proposes change strategies. Assimilation was the primary goal of higher education from early colonial times to the 20th century. Tribal response ranged from resistance to support of higher education. When the Federal Government began to dominate Native education in the…
ERIC Educational Resources Information Center
Pickert, Sarah M.
This report discusses the response of colleges and universities in the United States to the need of graduate students to become equipped to make personal and public policy decisions as citizens of an international society. Curriculum changes are showing a tightening of foreign language standards in schools of higher education and, throughout the…
ERIC Educational Resources Information Center
Ehrlich, Jenifer, Ed.
2006-01-01
"Forum Focus" was a semi-annual magazine of the Business-Higher Education Forum (BHEF) that featured articles on the role of business and higher education on significant issues affecting the P-16 education system. The magazine typically focused on themes featured at the most recently held semi-annual Forum meeting at the time of publication.…
NASA Astrophysics Data System (ADS)
Jun, Xie Cheng; Su, Yan; Wei, Zhang
2006-08-01
In this paper, a modified algorithm is introduced to improve the Rice coding algorithm, and image compression with the CDF(2,2) wavelet lifting scheme is investigated. Our experiments show that its lossless image compression performance is much better than Huffman, Zip, lossless JPEG, and RAR, and slightly better than (or equal to) the well-known SPIHT: the lossless compression rate is improved by about 60.4%, 45%, 26.2%, 16.7%, and 0.4% on average, respectively. The encoder is about 11.8 times faster than SPIHT's, and its time efficiency can be improved by 162%; the decoder is about 12.3 times faster than SPIHT's, and its time efficiency can be raised by about 148%. Instead of using the largest number of wavelet transform levels, this algorithm achieves high coding efficiency when the number of wavelet transform levels is larger than 3. For source models with distributions similar to the Laplacian, it can improve coding efficiency and realize progressive transmission coding and decoding.
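The CDF(2,2) lifting scheme mentioned here is the integer 5/3 transform used in lossless JPEG 2000. A minimal one-level sketch (function names and the simple edge handling are my own; the paper's coder adds Rice entropy coding on top) is:

```python
def cdf22_forward(x):
    # One level of the integer CDF(2,2) (LeGall 5/3) lifting transform.
    n = len(x)
    assert n % 2 == 0
    d = []  # detail (high-pass) coefficients
    for i in range(n // 2):
        left = x[2 * i]
        right = x[2 * i + 2] if 2 * i + 2 < n else x[2 * i]  # edge repeat
        d.append(x[2 * i + 1] - (left + right) // 2)         # predict step
    s = []  # smooth (low-pass) coefficients
    for i in range(n // 2):
        dl = d[i - 1] if i > 0 else d[0]                     # edge repeat
        s.append(x[2 * i] + (dl + d[i] + 2) // 4)            # update step
    return s, d

def cdf22_inverse(s, d):
    # Exact integer inverse: undo the update, then undo the predict.
    x = [0] * (2 * len(s))
    for i in range(len(s)):
        dl = d[i - 1] if i > 0 else d[0]
        x[2 * i] = s[i] - (dl + d[i] + 2) // 4
    for i in range(len(d)):
        left = x[2 * i]
        right = x[2 * i + 2] if 2 * i + 2 < len(x) else x[2 * i]
        x[2 * i + 1] = d[i] + (left + right) // 2
    return x
```

Because both lifting steps are integer and invertible, the round trip is lossless, which is what makes the transform suitable for lossless compression.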
ERIC Educational Resources Information Center
Brooks, Candice Elaine
2012-01-01
This article discusses the findings of an exploratory qualitative study that examined the influences of individual and collective sociocultural identities on the community involvements and high academic achievement of 10 Black alumni who attended a predominantly White institution between 1985 and 2008. Syntagmatic narrative analysis and…
ERIC Educational Resources Information Center
Lorch, Robert F., Jr.; Lorch, Elizabeth P.; Freer, Benjamin Dunham; Dunlap, Emily E.; Hodell, Emily C.; Calderhead, William J.
2014-01-01
Students (n = 1,069) from 60 4th-grade classrooms were taught the control of variables strategy (CVS) for designing experiments. Half of the classrooms were in schools that performed well on a state-mandated test of science achievement, and half were in schools that performed relatively poorly. Three teaching interventions were compared: an…
ERIC Educational Resources Information Center
Wurst, Christian; Smarkola, Claudia; Gaffney, Mary Anne
2008-01-01
Three years of graduating business honors cohorts in a large urban university were sampled to determine whether the introduction of ubiquitous laptop computers into the honors program contributed to student achievement, student satisfaction and constructivist teaching activities. The first year cohort consisted of honors students who did not have…
Guijarro-Herraiz, Carlos; Masana-Marin, Luis; Galve, Enrique; Cordero-Fort, Alberto
2014-01-01
Reducing low density lipoprotein-cholesterol (LDL-c) is the main lipid goal of treatment for patients with very high cardiovascular risk. In these patients the therapeutic goal is to achieve an LDL-c lower than 70 mg/dL, as recommended by the guidelines for cardiovascular prevention commonly used in Spain and Europe. However, the degree to which these objectives are achieved in this group of patients is very low. This article describes the prevalence of the problem and its underlying causes. Recommendations and tools that can facilitate the design of an optimal treatment strategy for achieving the goals are also given. In addition, a new tool with a simple algorithm is presented that can allow these very-high-risk patients to achieve the goals "in two steps", i.e., with only two doctor check-ups. PMID:25048471
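The arithmetic behind goal attainment is simple: the percent LDL-c reduction a patient needs follows directly from the current value and the 70 mg/dL target. A sketch of that calculation only (the function name is my own; treatment selection belongs to the cited guidelines, not this snippet):

```python
def required_ldl_reduction(current_ldl_mg_dl, target_mg_dl=70.0):
    # Percent LDL-c reduction needed to reach the <70 mg/dL goal.
    # Illustrative arithmetic only, not clinical guidance.
    if current_ldl_mg_dl <= target_mg_dl:
        return 0.0
    return 100.0 * (current_ldl_mg_dl - target_mg_dl) / current_ldl_mg_dl

print(required_ldl_reduction(140))  # a patient at 140 mg/dL needs a 50% reduction
```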
ERIC Educational Resources Information Center
Mudhovozi, P.; Gumani, M.; Maunganidze, L.; Sodi, T.
2010-01-01
The study explores the attribution styles of in-group and out-group members. Eighty-four (42 female and 42 male) undergraduate students were randomly selected from the Faculty of Education at an institution of higher learning in Zimbabwe. A questionnaire was used to capture the opinions of the participants. The data was analysed using the…
ERIC Educational Resources Information Center
James, Matthew R.
2009-01-01
Leal Filho, MacDermot, and Padgam (1996) contended that post-secondary institutions are well suited to take on leadership responsibilities for society's environmental protection. Higher education has the unique academic freedom to engage in critical thinking and bold experimentation in environmental sustainability (Cortese, 2003). Although…
ERIC Educational Resources Information Center
Mingle, James R., Ed.; Rodriguez, Esther M., Ed.
This report describes initiatives of higher education boards to provide equal educational opportunities for minority students in the following states: (1) Arizona; (2) Colorado; (3) Illinois; (4) Massachusetts; (5) Montana; (6) New York; (7) Ohio; and (8) Tennessee. Evidence of school completion, academic preparation, college participation rates,…
ERIC Educational Resources Information Center
Houston, Don
2010-01-01
While the past two decades have seen significant expansion and harmonisation of quality assurance mechanisms in higher education, there is limited evidence of positive effects on the quality of core processes of teaching and learning. The paradox of the separation of assurance from improvement is explored. A shift in focus from surveillance to…
ERIC Educational Resources Information Center
Jackson, Norman; Ward, Rob
2004-01-01
This article addresses the challenge of developing new conceptual knowledge to help us make better sense of the way that higher education is approaching the "problem" of representing (documenting, certifying and communicating by other means) students' learning for the super-complex world described by Barnett (2000b). The current UK solution to…
Chakraborty, Mohua; Ghosh, Sankar Kumar
2015-04-01
Efficacy of cytochrome c oxidase subunit I (COI) DNA barcode in higher taxon assignment is still under debate in spite of several attempts, using the conventional DNA barcoding methods, to assign higher taxa. Here we try to understand whether nucleotide and amino acid sequence in COI gene carry sufficient information to assign species to their higher taxonomic rank, using 160 species of Indian freshwater fishes. Our results reveal that with increase in the taxonomic rank, sequence conservation decreases for both nucleotides and amino acids. Order level exhibits lowest conservation with 50% of the nucleotides and amino acids being conserved. Among the variable sites, 30-50% were found to carry high information content within an order, while it was 70-80% within a family and 80-99% within a genus. High information content shows sites with almost conserved sequence but varying at one or two locations, which can be due to variations at species or population level. Thus, the potential of COI gene in higher taxon assignment is revealed with validation of ample inherent signals latent in the gene. PMID:24409929
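The rank-level conservation percentages above come down to counting invariant alignment columns. A minimal sketch of that measure (function name and toy alignment are my own; the study's information-content analysis is more elaborate):

```python
def conserved_fraction(aligned_seqs):
    # Fraction of alignment columns where every sequence shares the same
    # residue: a simple proxy for the conservation levels reported per rank.
    length = len(aligned_seqs[0])
    conserved = sum(
        1 for i in range(length)
        if len({seq[i] for seq in aligned_seqs}) == 1
    )
    return conserved / length
```

Applied within a genus, family, and order in turn, this kind of column count yields the decreasing conservation trend the abstract describes.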
ERIC Educational Resources Information Center
New York City Board of Education, Brooklyn, NY. Office of Research, Evaluation, and Assessment.
A final evaluation was conducted in the 1989-90 school year of New York City (New York) Board of Education's project, Higher Achievement and Improvement Through Instruction with Computers and Scholarly Transition and Resource Systems (HAITI STARS). The project served 524 limited-English-proficient Spanish-speaking students at Far Rockaway High…
ERIC Educational Resources Information Center
Augustin, Marc A.
The Higher Achievement and Improvement Through Instruction with Computers and Scholarly Transition And Resource Systems program (Project HAITI STARS), a federally-funded bilingual education program, served 425 students of limited English proficiency at three high schools in New York City during its fifth contract year. Students received…
Tavares, Eveline Q P; De Souza, Amanda P; Buckeridge, Marcos S
2015-07-01
Cell-wall recalcitrance to hydrolysis still represents one of the major bottlenecks for second-generation bioethanol production. This occurs despite the development of pre-treatments, the prospect of new enzymes, and the production of transgenic plants with less-recalcitrant cell walls. Recalcitrance, which is the intrinsic resistance to breakdown imposed by polymer assembly, is the result of inherent limitations in its three domains. These consist of: (i) porosity, associated with a pectin matrix impairing trafficking through the wall; (ii) the glycomic code, which refers to the fine-structural emergent complexity of cell-wall polymers that are unique to cells, tissues, and species; and (iii) cellulose crystallinity, which refers to the organization in micro- and/or macrofibrils. One way to circumvent recalcitrance could be by following cell-wall hydrolysis strategies underlying plant endogenous mechanisms that are optimized to precisely modify cell walls in planta. Thus, the cell-wall degradation that occurs during fruit ripening, abscission, storage cell-wall mobilization, and aerenchyma formation are reviewed in order to highlight how plants deal with recalcitrance and which are the routes to couple prospective enzymes and cocktail designs with cell-wall features. The manipulation of key enzyme levels in planta can help achieving biologically pre-treated walls (i.e. less recalcitrant) before plants are harvested for bioethanol production. This may be helpful in decreasing the costs associated with producing bioethanol from biomass. PMID:25922489
ERIC Educational Resources Information Center
Baran, Bahar; Kiliç, Eylem
2015-01-01
The purpose of this study is to analyze three separate constructs (demographics, study habits, and technology familiarity) that can be used to identify university students' characteristics and the relationship between each of these constructs with student achievement. A survey method was used for the current study, and the participants included…
Benson, Nicholas F; Kranzler, John H; Floyd, Randy G
2016-10-01
Prior research examining relations between cognitive ability and academic achievement has been based on different theoretical models, has employed both latent variables and observed variables, and has used a variety of analytic methods. Not surprisingly, results have been inconsistent across studies. The aims of this study were to (a) examine how relations between psychometric g, Cattell-Horn-Carroll (CHC) broad abilities, and academic achievement differ across higher-order and bifactor models; (b) examine how well various types of observed scores corresponded with latent variables; and (c) compare two types of observed scores (i.e., refined and non-refined factor scores) as predictors of academic achievement. Results suggest that cognitive-achievement relations vary across theoretical models and that both types of factor scores tend to correspond well with the models on which they are based. However, orthogonal refined factor scores (derived from a bifactor model) have the advantage of controlling for multicollinearity arising from the measurement of psychometric g across all measures of cognitive abilities. Results indicate that the refined factor scores provide more precise representations of their targeted constructs than non-refined factor scores and maintain close correspondence with the cognitive-achievement relations observed for latent variables. Thus, we argue that orthogonal refined factor scores provide more accurate representations of the relations between CHC broad abilities and achievement outcomes than non-refined scores do. Further, the use of refined factor scores addresses calls for the application of scores based on latent variable models. PMID:27586067
ERIC Educational Resources Information Center
What Works Clearinghouse, 2014
2014-01-01
This study of 952 fifth and sixth graders in Washington, DC, and Alexandria, Virginia, found that students who were offered the "Higher Achievement" program had higher test scores in mathematical problem solving and were more likely to be admitted to and attend private competitive high schools. "Higher Achievement" is a…
Salfity, M.F; Huntley, J.M; Graves, M.J; Marklund, O; Cusack, R; Beauregard, D.A
2005-01-01
Phase contrast magnetic resonance velocity imaging is a powerful technique for quantitative in vivo blood flow measurement. Current practice normally involves restricting the sensitivity of the technique so as to avoid the problem of the measured phase being ‘wrapped’ onto the range −π to +π. However, as a result, dynamic range and signal-to-noise ratio are sacrificed. Alternatively, the true phase values can be estimated by a phase unwrapping process which consists of adding integral multiples of 2π to the measured wrapped phase values. In the presence of noise and data undersampling, the phase unwrapping problem becomes non-trivial. In this paper, we investigate the performance of three different phase unwrapping algorithms when applied to three-dimensional (two spatial axes and one time axis) phase contrast datasets. A simple one-dimensional temporal unwrapping algorithm, a more complex and robust three-dimensional unwrapping algorithm and a novel velocity encoding unwrapping algorithm which involves unwrapping along a fourth dimension (the ‘velocity encoding’ direction) are discussed, and results from the three are presented and compared. It is shown that compared to the traditional approach, both dynamic range and signal-to-noise ratio can be increased by a factor of up to five times, which demonstrates considerable promise for a possible eventual clinical implementation. The results are also of direct relevance to users of any other technique delivering time-varying two-dimensional phase images, such as dynamic speckle interferometry and synthetic aperture radar. PMID:16849270
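The simple one-dimensional temporal unwrapping algorithm referred to above can be sketched in a few lines: whenever consecutive samples jump by more than π, a multiple of 2π is added to restore continuity. This is a minimal scalar version (the paper's three-dimensional and velocity-encoding algorithms are considerably more robust to noise):

```python
import math

def unwrap_1d(phases):
    # One-dimensional phase unwrapping: assume the true phase changes by
    # less than pi between samples, and add/subtract 2*pi at each wrap.
    out = [phases[0]]
    offset = 0.0
    for prev, cur in zip(phases, phases[1:]):
        delta = cur - prev
        if delta > math.pi:
            offset -= 2 * math.pi   # wrapped downward: compensate
        elif delta < -math.pi:
            offset += 2 * math.pi   # wrapped upward: compensate
        out.append(cur + offset)
    return out
```

The assumption that true inter-sample changes stay below π is exactly what breaks down under noise and undersampling, which is why the more sophisticated spatial and velocity-encoding unwrapping schemes are needed.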
El-Qulity, Said Ali; Mohamed, Ali Wagdy
2016-01-01
This paper proposes a nonlinear integer goal programming model (NIGPM) for solving the general problem of admission capacity planning in a country as a whole. The work aims to satisfy most of a country's key objectives related to the enrollment problem in higher education. The general outlines of the system are developed, along with the solution methodology for application over the time horizon of a given plan. Up-to-date data for Saudi Arabia are used as a case study, and a novel evolutionary algorithm based on a modified differential evolution (DE) algorithm is used to handle the complexity of the NIGPM generated for different goal priorities. The experimental results presented in this paper show the approach's effectiveness in solving the admission capacity problem for higher education in terms of final solution quality and robustness. PMID:26819583
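The paper's modified DE variant is not reproduced in the abstract; as a baseline, the classic DE/rand/1/bin scheme it builds on looks like the following sketch (function name, parameters, and the toy sphere objective in the test are my own):

```python
import random

def de_minimize(f, bounds, pop_size=20, F=0.8, CR=0.9, generations=300):
    # Classic DE/rand/1/bin: mutate with a scaled difference of two random
    # members, crossover with the target, keep the trial if it is no worse.
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fitness = [f(ind) for ind in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = random.randrange(dim)  # ensure at least one mutated gene
            trial = []
            for j in range(dim):
                if random.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    trial.append(min(max(v, lo), hi))  # clip to bounds
                else:
                    trial.append(pop[i][j])
            ft = f(trial)
            if ft <= fitness[i]:  # greedy one-to-one selection
                pop[i], fitness[i] = trial, ft
    best = min(range(pop_size), key=lambda k: fitness[k])
    return pop[best], fitness[best]
```

For the NIGPM, the authors additionally handle integer variables and goal priorities, which this baseline omits.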
Rieger-Fackeldey, Esther; Sindelar, Richard; Jonzon, Anders; Schulze, Andreas; Sedin, Gunnar
2005-01-01
Background Inhibition of phrenic nerve activity (PNA) can be achieved when alveolar ventilation is adequate and when stretching of lung tissue stimulates mechanoreceptors to inhibit inspiratory activity. During mechanical ventilation under different lung conditions, inhibition of PNA can provide a physiological setting at which ventilatory parameters can be compared and related to arterial blood gases and pH. Objective To study lung mechanics and gas exchange at inhibition of PNA during controlled gas ventilation (GV) and during partial liquid ventilation (PLV) before and after lung lavage. Methods Nine anaesthetised, mechanically ventilated young cats (age 3.8 ± 0.5 months, weight 2.3 ± 0.1 kg) (mean ± SD) were studied with stepwise increases in peak inspiratory pressure (PIP) until total inhibition of PNA was attained before lavage (with GV) and after lavage (GV and PLV). Tidal volume (Vt), PIP, oesophageal pressure and arterial blood gases were measured at inhibition of PNA. One way repeated measures analysis of variance and Student Newman Keuls-tests were used for statistical analysis. Results During GV, inhibition of PNA occurred at lower PIP, transpulmonary pressure (Ptp) and Vt before than after lung lavage. After lavage, inhibition of inspiratory activity was achieved at the same PIP, Ptp and Vt during GV and PLV, but occurred at a higher PaCO2 during PLV. After lavage compliance at inhibition was almost the same during GV and PLV and resistance was lower during GV than during PLV. Conclusion Inhibition of inspiratory activity occurs at a higher PaCO2 during PLV than during GV in cats with surfactant-depleted lungs. This could indicate that PLV induces better recruitment of mechanoreceptors than GV. PMID:15748281
Otsuka, Mitsuo; Kawahara, Taisuke; Isaka, Tadao
2016-03-01
This study aimed to clarify the contribution of differences in step length and step rate to sprinting velocity in an athletic race compared with speed training. Nineteen well-trained male and female sprinters volunteered to participate in this study. Sprinting motions were recorded for each sprinter during both 100-m races and speed training (60-, 80-, and 100-m dash from a block start) for 14 days before the race. Repeated-measures analysis of covariance was used to compare the step characteristics and sprinting velocity between race and speed training, adjusted for covariates including race-training differences in the coefficients of restitution of the all-weather track, wind speed, air temperature, and sex. The average sprinting velocity to the 50-m mark was significantly greater in the race than in speed training (8.26 ± 0.22 m·s⁻¹ vs. 8.00 ± 0.70 m·s⁻¹, p < 0.01). Although no significant difference was seen in the average step length to the 50-m mark between the race and speed training (1.81 ± 0.09 m vs. 1.80 ± 0.09 m, p = 0.065), the average step rate was significantly greater in the race than in speed training (4.56 ± 0.17 Hz vs. 4.46 ± 0.13 Hz, p < 0.01). These findings suggest that sprinters achieve higher sprinting velocity and can run with higher exercise intensity and more rapid motion during a race than during speed training, even if speed training was performed at perceived high intensity. PMID:26907837
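The decomposition underlying the analysis is that average velocity is the product of step length and step rate; the reported means check out under that identity (function name is my own):

```python
def sprint_velocity(step_length_m, step_rate_hz):
    # Average sprinting velocity = step length x step rate.
    return step_length_m * step_rate_hz

race = sprint_velocity(1.81, 4.56)      # close to the reported 8.26 m/s
training = sprint_velocity(1.80, 4.46)  # close to the reported 8.00 m/s
```

The near-identical step lengths thus imply that the race-day velocity gain comes almost entirely from the higher step rate.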
Lees, J.R.
1983-01-01
This study was a systematic replication of a study by Stagliano (1981). Additional hypotheses concerning pretest, student major, and student section variance were tested. Achievement in energy knowledge and conservation attitudes attained by (a) lecture-discussion enriched with the Energy-Environment Simulator and (b) lecture-discussion methods of instruction were measured. Energy knowledge was measured on the Energy Knowledge Assessment Test (EKAT), and attitudes were measured on the Youth Energy Survey (YES). The lecture-discussion simulation (LDS) treatment used a two-hour out-of-class activity in debriefing. The population consisted of 142 college student volunteers, randomly selected and assigned to one of two groups of 71 students for each treatment. Stagliano used three groups (n = 35), one group receiving an energy-game treatment. Both studies used the pretest-posttest true experimental design. The present study included 28 hypotheses, eight of which were found to be significant. Stagliano used 12 hypotheses, all of which were rejected. The present study hypothesized that students who received the LDS treatment would obtain significantly higher scores on the EKAT and the YES instruments. Significance was found (alpha level .05) on the EKAT and on the YES total subscale when covaried for the effects of pretest, student major, and student section. When covarying the effects of pretest scores only, significance was found on the EKAT. All YES hypotheses were rejected.
Ekinci, Yunus Levent
2016-01-01
This paper presents an easy-to-use open source computer algorithm (code) for estimating the depths of isolated single thin dike-like source bodies by using numerical second-, third-, and fourth-order horizontal derivatives computed from observed magnetic anomalies. The approach does not require a priori information and uses some filters of successive graticule spacings. The computed higher-order horizontal derivative datasets are used to solve nonlinear equations for depth determination. The solutions are independent from the magnetization and ambient field directions. The practical usability of the developed code, designed in MATLAB R2012b (MathWorks Inc.), was successfully examined using some synthetic simulations with and without noise. The algorithm was then used to estimate the depths of some ore bodies buried in different regions (USA, Sweden, and Canada). Real data tests clearly indicated that the obtained depths are in good agreement with those of previous studies and drilling information. Additionally, a state-of-the-art inversion scheme based on particle swarm optimization produced comparable results to those of the higher-order horizontal derivative analyses in both synthetic and real anomaly cases. Accordingly, the proposed code is verified to be useful in interpreting isolated single thin dike-like magnetized bodies and may be an alternative processing technique. The open source code can be easily modified and adapted to suit the benefits of other researchers. PMID:27610303
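The numerical higher-order horizontal derivatives at the heart of the method can be illustrated with repeated central differences along a profile (a simplified 1D sketch with a made-up function name; the published MATLAB code uses successive graticule spacings and handles noise):

```python
def horizontal_derivative(values, spacing, order=2):
    # n-th order horizontal derivative of a sampled profile, computed by
    # applying the central-difference operator `order` times. Each pass
    # shortens the profile by one sample at each end.
    out = list(values)
    for _ in range(order):
        out = [(out[i + 1] - out[i - 1]) / (2 * spacing)
               for i in range(1, len(out) - 1)]
    return out
```

In the paper, second- through fourth-order derivatives of the observed magnetic anomaly computed this way feed the nonlinear depth-determination equations.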
ERIC Educational Resources Information Center
Waldron, Chad H.
2008-01-01
The research study examined whether a difference existed between the reading achievement scores of an experimental group and a control group in standardized reading achievement. This difference measured the effect of systematic oral reading fluency instruction with repeated readings. Data from the 4Sight Pennsylvania Benchmark Reading Assessments…
ERIC Educational Resources Information Center
Chudowsky, Naomi; Chudowsky, Victor; Kober, Nancy
2009-01-01
This report is the first in a series of reports describing results from the Center on Education Policy's (CEP's) third annual analysis of state testing data. The report provides an update on student performance at the proficient level of achievement, and for the first time, includes data about student performance at the advanced and basic levels.…
ERIC Educational Resources Information Center
Clune, William H.; White, Paula A.
1992-01-01
Transcript data were analyzed to determine changes in course taking among graduates of high schools including mostly lower achieving students in California, Florida, Missouri, and Pennsylvania, which adopted high graduation requirements in the 1980s. Average credits per student increased in all academic subjects, as did the courses' difficulty…
NASA Astrophysics Data System (ADS)
Chakraborty, Swarnendu Kumar; Goswami, Rajat Subhra; Bhunia, Chandan Tilak; Bhunia, Abhinandan
2016-06-01
Aggressive packet combining (APC) is a well-established scheme in the literature, and several modifications have been studied to improve its throughput. In this paper, three new modifications of APC are proposed. The performance of the proposed modifications is studied by simulation and reported here. A hybrid scheme is also proposed for achieving higher throughput, and the disjoint factor of conventional APC is compared with that of the proposed schemes.
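Conventional APC corrects errors by bit-wise majority voting over multiple erroneous received copies of the same packet. A minimal sketch of that core operation (function name is my own; the paper's modifications refine how copies are selected and combined):

```python
def apc_combine(copies):
    # Aggressive packet combining: take the bit-wise majority over an odd
    # number of received copies of the same packet. Errors that hit
    # different bit positions in different copies are voted out.
    n = len(copies)
    return [1 if sum(bits) > n // 2 else 0 for bits in zip(*copies)]
```

As long as no bit position is corrupted in a majority of the copies, the vote recovers the transmitted packet without retransmission, which is the source of APC's throughput gain.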
ERIC Educational Resources Information Center
Briddell, Andrew
2013-01-01
This study of 1,974 fifth grade students investigated potential relationships between writing process-based instruction practices and higher-order thinking measured by a standardized literacy assessment. Writing process is defined as a highly complex, socio-cognitive process that includes: planning, text production, review, metacognition, writing…
ERIC Educational Resources Information Center
Kennedy, Gary J.
2013-01-01
This essay proposes that much of what constitutes the quality of an institution of higher education is the quality of the students attending the institution. This quality, however, is conceptualized to extend beyond that of academic ability. Specifically, three propositions are considered. First, it is proposed that a core construct of student…
2012-01-01
Background The algorithmic approach to guidelines has been introduced and promoted on a large scale since the 1970s. This study aims at comparing the performance of three algorithms for the management of chronic cough in patients with HIV infection, and at reassessing the current position of algorithmic guidelines in clinical decision making through an analysis of accuracy, harm and complexity. Methods Data were collected at the University Hospital of Kigali (CHUK) in a total of 201 HIV-positive hospitalised patients with chronic cough. We simulated management of each patient following the three algorithms. The first was locally tailored by clinicians from CHUK, the second and third were drawn from publications by Médecins sans Frontières (MSF) and the World Health Organisation (WHO). Semantic analysis techniques known as Clinical Algorithm Nosology were used to compare them in terms of complexity and similarity. For each of them, we assessed the sensitivity, delay to diagnosis and hypothetical harm of false positives and false negatives. Results The principal diagnoses were tuberculosis (21%) and pneumocystosis (19%). Sensitivity, representing the proportion of correct diagnoses made by each algorithm, was 95.7%, 88% and 70% for CHUK, MSF and WHO, respectively. Mean time to appropriate management was 1.86 days for CHUK and 3.46 for the MSF algorithm. The CHUK algorithm was the most complex, followed by MSF and WHO. Total harm was by far the highest for the WHO algorithm, followed by MSF and CHUK. Conclusions This study confirms our hypothesis that sensitivity and patient safety (i.e. less expected harm) are proportional to the complexity of algorithms, though increased complexity may make them difficult to use in practice. PMID:22260242
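The sensitivity figures reported for the three algorithms are simple proportions of correct diagnoses over simulated patients; a sketch of that computation (function name and toy data are my own):

```python
def sensitivity(diagnoses):
    # Percentage of patients for whom the simulated algorithm reached the
    # correct diagnosis, as used to compare the CHUK, MSF and WHO algorithms.
    correct = sum(1 for predicted, actual in diagnoses if predicted == actual)
    return 100.0 * correct / len(diagnoses)
```

The study pairs this accuracy measure with mean time-to-management and expected harm, so a higher sensitivity alone does not settle which algorithm is preferable.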
ERIC Educational Resources Information Center
MacKay, Irene Douglas
The purpose of this study was to investigate the relationship between a student's confidence in his computational procedures for each of the four basic arithmetic operations and the student's achievement on computation problems. All of the students in grades 5 through 8 in one school system (a total of 6186 students) were given a questionnaire to…
Algorithms and Algorithmic Languages.
ERIC Educational Resources Information Center
Veselov, V. M.; Koprov, V. M.
This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…
Lin, Frank Yeong-Sung; Hsiao, Chiu-Han; Lin, Leo Shih-Chang; Wen, Yean-Fu
2013-01-01
Recent advances in wireless sensor network (WSN) applications such as the Internet of Things (IoT) have attracted a lot of attention. Sensor nodes have to monitor and cooperatively pass their data, such as temperature, sound, and pressure, through the network under constrained physical or environmental conditions. Quality of Service (QoS) is very sensitive to network delays. When resources are constrained and the number of receivers increases rapidly, how the sensor network can provide good QoS (measured as end-to-end delay) becomes a very critical problem. In this paper, a solution to the wireless sensor network multicasting problem is proposed in which a mathematical model that provides services to accommodate delay fairness for each subscriber is constructed. Granting equal consideration to both network link capacity assignment and routing strategies for each multicast group guarantees the intra-group and inter-group fairness of end-to-end delay. Minimizing delay and achieving fairness are ultimately accomplished through the Lagrangean relaxation method and the subgradient optimization technique. Test results indicate that the new system runs with greater effectiveness and efficiency. PMID:23493123
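The Lagrangean relaxation / subgradient combination works by dualizing hard constraints and iteratively stepping the multipliers along the constraint violation. A toy sketch on a one-variable problem, not the paper's network model (function name, toy objective, and step rule are my own):

```python
def subgradient_dual(steps=1000):
    # Lagrangean relaxation of: minimize x**2 subject to x >= 2.
    # Dualize the constraint with multiplier lam >= 0; the subgradient of
    # the dual function at lam is the constraint violation 2 - x(lam).
    lam = 0.0
    for k in range(1, steps + 1):
        x = lam / 2.0                 # inner minimizer of x**2 + lam*(2 - x)
        g = 2.0 - x                   # subgradient (constraint violation)
        lam = max(0.0, lam + g / k)   # diminishing step, projected to lam >= 0
    return lam, lam / 2.0

lam, x = subgradient_dual()  # converges toward lam = 4, x = 2
```

The WSN model dualizes the capacity and routing coupling constraints in the same way, with one multiplier per relaxed constraint.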
Attractiveness and School Achievement
ERIC Educational Resources Information Center
Salvia, John; And Others
1977-01-01
The purpose of this study was to ascertain the relationship between rated attractiveness and two measures of school performance. Attractive children received significantly higher report cards and, to some degree, higher achievement test scores than their unattractive peers. (Author)
High Rate Pulse Processing Algorithms for Microcalorimeters
NASA Astrophysics Data System (ADS)
Tan, Hui; Breus, Dimitry; Hennig, Wolfgang; Sabourov, Konstantin; Collins, Jeffrey W.; Warburton, William K.; Bertrand Doriese, W.; Ullom, Joel N.; Bacrania, Minesh K.; Hoover, Andrew S.; Rabin, Michael W.
2009-12-01
It has been demonstrated that microcalorimeter spectrometers based on superconducting transition-edge-sensors can readily achieve sub-100 eV energy resolution near 100 keV. However, the active volume of a single microcalorimeter has to be small in order to maintain good energy resolution, and pulse decay times are normally on the order of milliseconds due to slow thermal relaxation. Therefore, spectrometers are typically built with an array of microcalorimeters to increase detection efficiency and count rate. For large arrays, however, as much pulse processing as possible must be performed at the front end of readout electronics to avoid transferring large amounts of waveform data to a host computer for post-processing. In this paper, we present digital filtering algorithms for processing microcalorimeter pulses in real time at high count rates. The goal for these algorithms, which are being implemented in readout electronics that we are also currently developing, is to achieve sufficiently good energy resolution for most applications while being: a) simple enough to be implemented in the readout electronics; and, b) capable of processing overlapping pulses, and thus achieving much higher output count rates than those achieved by existing algorithms. Details of our algorithms are presented, and their performance is compared to that of the "optimal filter" that is currently the predominantly used pulse processing algorithm in the cryogenic-detector community.
High rate pulse processing algorithms for microcalorimeters
Rabin, Michael; Hoover, Andrew S; Bacrania, Minesh K; Tan, Hui; Breus, Dimitry; Hennig, Wolfgang; Sabourov, Konstantin; Collins, Jeff; Warburton, William K; Doriese, Bertrand; Ullom, Joel N
2009-01-01
It has been demonstrated that microcalorimeter spectrometers based on superconducting transition-edge sensors can readily achieve sub-100 eV energy resolution near 100 keV. However, the active volume of a single microcalorimeter has to be small to maintain good energy resolution, and pulse decay times are normally on the order of milliseconds due to slow thermal relaxation. Consequently, spectrometers are typically built with an array of microcalorimeters to increase detection efficiency and count rate. Large arrays, however, require as much pulse processing as possible to be performed at the front end of the readout electronics to avoid transferring large amounts of waveform data to a host computer for processing. In this paper, we present digital filtering algorithms for processing microcalorimeter pulses in real time at high count rates. The goal for these algorithms, which are being implemented in the readout electronics that we are also currently developing, is to achieve sufficiently good energy resolution for most applications while being (a) simple enough to be implemented in the readout electronics and (b) capable of processing overlapping pulses, and thus achieving much higher output count rates than those achieved by existing algorithms. Details of these algorithms are presented, and their performance is compared to that of the 'optimal filter' that is the dominant pulse processing algorithm in the cryogenic-detector community.
A class of parallel algorithms for computation of the manipulator inertia matrix
NASA Technical Reports Server (NTRS)
Fijany, Amir; Bejczy, Antal K.
1989-01-01
Parallel and parallel/pipeline algorithms for computation of the manipulator inertia matrix are presented. An algorithm based on the composite rigid-body spatial inertia method, which provides better features for parallelization, is used for the computation of the inertia matrix. Two parallel algorithms are developed which achieve the time lower bound in computation. Also described is the mapping of these algorithms with topological variation on a two-dimensional processor array, with nearest-neighbor connection, and with cardinality variation on a linear processor array. An efficient parallel/pipeline algorithm for the linear array was also developed, achieving significantly higher efficiency.
Development of a Compound Optimization Approach Based on Imperialist Competitive Algorithm
NASA Astrophysics Data System (ADS)
Wang, Qimei; Yang, Zhihong; Wang, Yong
In this paper, an improved approach to the imperialist competitive algorithm is developed to achieve greater performance. The Nelder-Mead simplex method is applied to execute alternately with the original procedures of the algorithm. The approach is tested on twelve widely used benchmark functions and is also compared with other related studies. It is shown that the proposed approach has a faster convergence rate, better search ability, and higher stability than the original algorithm and other related methods.
Algorithmic synthesis using Python compiler
NASA Astrophysics Data System (ADS)
Cieszewski, Radoslaw; Romaniuk, Ryszard; Pozniak, Krzysztof; Linczuk, Maciej
2015-09-01
This paper presents a Python-to-VHDL compiler. The compiler interprets an algorithmic description of a desired behavior written in Python and translates it to VHDL. FPGAs combine many benefits of both software and ASIC implementations. Like software, the programmed circuit is flexible and can be reconfigured over the lifetime of the system. FPGAs have the potential to achieve far greater performance than software by bypassing the fetch-decode-execute cycle of traditional processors and exploiting a greater level of parallelism, using many computational resources at the same time. Creating parallel programs implemented in FPGAs in pure HDL is difficult and time consuming. Using a higher level of abstraction and a high-level synthesis compiler, implementation time can be reduced. The compiler has been implemented in the Python language. This article describes the design, implementation, and results of the created tools.
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
ERIC Educational Resources Information Center
Hartley, Tricia
2009-01-01
National learning and skills policy aims both to build economic prosperity and to achieve social justice. Participation in higher education (HE) has the potential to contribute substantially to both aims. That is why the Campaign for Learning has supported the ambition to increase the proportion of the working-age population with a Level 4…
ERIC Educational Resources Information Center
Walberg, Herbert J.
2010-01-01
For the last half century, higher spending and many modern reforms have failed to raise the achievement of students in the United States to the levels of other economically advanced countries. A possible explanation, says Herbert Walberg, is that much current education theory is ill informed about scientific psychology, often drawing on fads and…
Robust facial expression recognition algorithm based on local metric learning
NASA Astrophysics Data System (ADS)
Jiang, Bin; Jia, Kebin
2016-01-01
In facial expression recognition tasks, different facial expressions are often confused with each other. Motivated by the fact that a learned metric can significantly improve the accuracy of classification, a facial expression recognition algorithm based on local metric learning is proposed. First, k-nearest neighbors of the given testing sample are determined from the total training data. Second, chunklets are selected from the k-nearest neighbors. Finally, the optimal transformation matrix is computed by maximizing the total variance between different chunklets and minimizing the total variance of instances in the same chunklet. The proposed algorithm can find the suitable distance metric for every testing sample and improve the performance on facial expression recognition. Furthermore, the proposed algorithm can be used for vector-based and matrix-based facial expression recognition. Experimental results demonstrate that the proposed algorithm could achieve higher recognition rates and be more robust than baseline algorithms on the JAFFE, CK, and RaFD databases.
Parallel algorithms for dynamically partitioning unstructured grids
Diniz, P.; Plimpton, S.; Hendrickson, B.; Leland, R.
1994-10-01
Grid partitioning is the method of choice for decomposing a wide variety of computational problems into naturally parallel pieces. In problems where computational load on the grid or the grid itself changes as the simulation progresses, the ability to repartition dynamically and in parallel is attractive for achieving higher performance. We describe three algorithms suitable for parallel dynamic load-balancing which attempt to partition unstructured grids so that computational load is balanced and communication is minimized. The execution time of algorithms and the quality of the partitions they generate are compared to results from serial partitioners for two large grids. The integration of the algorithms into a parallel particle simulation is also briefly discussed.
Ehsan, Shoaib; Kanwal, Nadia; Clark, Adrian F; McDonald-Maier, Klaus D
2012-01-01
Speeded-Up Robust Features is a feature extraction algorithm designed for real-time execution, although this is rarely achievable on low-power hardware such as that in mobile robots. One way to reduce the computation is to discard some of the scale-space octaves, and previous research has simply discarded the higher octaves. This paper shows that this approach is not always the most sensible and presents an algorithm for choosing which octaves to discard based on the properties of the imagery. Results obtained with this best octaves algorithm show that it is able to achieve a significant reduction in computation without compromising matching performance. PMID:21712160
Rempp, Florian; Mahler, Guenter; Michel, Mathias
2007-09-15
We introduce a scheme to perform the cooling algorithm, first presented by Boykin et al. in 2002, an arbitrary number of times on the same set of qubits. We achieve this goal by adding an additional SWAP gate and a bath contact to the algorithm. This way one qubit may repeatedly be cooled without adding additional qubits to the system. By using a product Liouville space to model the bath contact, we calculate the density matrix of the system after a given number of applications of the algorithm.
Linear-scaling and parallelisable algorithms for stochastic quantum chemistry
NASA Astrophysics Data System (ADS)
Booth, George H.; Smart, Simon D.; Alavi, Ali
2014-07-01
For many decades, quantum chemical method development has been dominated by algorithms which involve increasingly complex series of tensor contractions over one-electron orbital spaces. Procedures for their derivation and implementation have evolved to require the minimum amount of logic and rely heavily on computationally efficient library-based matrix algebra and optimised paging schemes. In this regard, the recent development of exact stochastic quantum chemical algorithms to reduce computational scaling and memory overhead requires a contrasting algorithmic philosophy, but one which when implemented efficiently can achieve higher accuracy/cost ratios with small random errors. Additionally, they can exploit the continuing trend for massive parallelisation which hinders the progress of deterministic high-level quantum chemical algorithms. In the Quantum Monte Carlo community, stochastic algorithms are ubiquitous but the discrete Fock space of quantum chemical methods is often unfamiliar, and the methods introduce new concepts required for algorithmic efficiency. In this paper, we explore these concepts and detail an algorithm used for Full Configuration Interaction Quantum Monte Carlo (FCIQMC), which is implemented and available in MOLPRO and as a standalone code, and is designed for high-level parallelism and linear-scaling with walker number. Many of the algorithms are also in use in, or can be transferred to, other stochastic quantum chemical methods and implementations. We apply these algorithms to the strongly correlated chromium dimer to demonstrate their efficiency and parallelism.
Fast parallel algorithm for slicing STL based on pipeline
NASA Astrophysics Data System (ADS)
Ma, Xulong; Lin, Feng; Yao, Bo
2016-04-01
In the additive manufacturing field, current research on data processing mainly focuses on the slicing of large STL files or complicated CAD models. To improve efficiency and reduce slicing time, a parallel algorithm has great advantages. However, traditional algorithms cannot make full use of multi-core CPU hardware resources. In this paper, a fast parallel algorithm is presented to speed up data processing. A pipeline mode is adopted to design the parallel algorithm, and the complexity of the pipeline algorithm is analyzed theoretically. To evaluate the performance of the new algorithm, the effects of thread count and layer count are investigated in a series of experiments. The experimental results show that thread count and layer count are two notable factors in the speedup ratio. The tendency of speedup versus thread count reveals a positive relationship that agrees well with Amdahl's law, and the tendency of speedup versus layer count likewise keeps a positive relationship, agreeing with Gustafson's law. The new algorithm uses topological information to compute contours with a parallel method of speedup. Another parallel algorithm, based on data parallelism, is used in the experiments to show that the pipeline parallel mode is more efficient. A case study finally demonstrates the performance of the new parallel algorithm. Compared with the serial slicing algorithm, the new pipeline parallel algorithm can make full use of multi-core CPU hardware and accelerate the slicing process; compared with the data-parallel slicing algorithm, the pipeline parallel model achieves a much higher speedup ratio and efficiency.
General cardinality genetic algorithms
Koehler; Bhattacharyya; Vose
1997-01-01
A complete generalization of the Vose genetic algorithm model from the binary to the higher cardinality case is provided. Boolean AND and EXCLUSIVE-OR operators are replaced by multiplication and addition over rings of integers. Walsh matrices are generalized with finite Fourier transforms for higher cardinality usage. Comparisons of results to the binary case are provided. PMID:10021767
[Deregulation and Higher Education].
ERIC Educational Resources Information Center
Business Officer, 1982
1982-01-01
The extent to which the Reagan Administration has achieved its deregulation goals in the area of higher education is addressed in three articles: "Deregulation and Higher Education: The View a Year Later" (Sheldon Elliot Steinbach); "Student Financial Aid Deregulation: Rhetoric or Reality?" (Robin E. Jenkins); and "Administration Reform of Civil…
General Structure Design for Fast Image Processing Algorithms Based upon FPGA DSP Slice
NASA Astrophysics Data System (ADS)
Wasfy, Wael; Zheng, Hong
Increasing the speed and accuracy of fast image processing algorithms that compute image intensity for low-level 3x3 kernel operations, which differ in kernel but share the same parallel calculation method, is the target of this paper. The FPGA is one of the fastest embedded systems that can be used to implement fast image processing algorithms. By using the DSP slice module inside the FPGA, we aim to exploit the advantages of the DSP slice: faster and more accurate calculations, a higher number of bits in calculations, and flexible calculation capabilities. Using a higher number of bits during algorithm calculations leads to higher accuracy than running the same algorithm with fewer bits; at the same time, reducing FPGA resource usage to the minimum the algorithm's calculations require is an important goal. The recommended design therefore uses as few DSP slices as possible while benefiting from their accuracy: 48-bit precision in addition and 18 x 18-bit precision in multiplication. To validate the design, the Gaussian filter and Sobel x edge detector image processing algorithms were implemented. We also compare against another design to demonstrate the improvements in calculation accuracy and speed; that design, as discussed later in this paper, uses at most 12-bit accuracy in its addition and multiplication calculations.
Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs
Chen, Haijian; Han, Dongmei; Dai, Yonghui; Zhao, Lina
2015-01-01
In recent years, Massive Open Online Courses (MOOCs) are very popular among college students and have a powerful impact on academic institutions. In the MOOCs environment, knowledge discovery and knowledge sharing are very important, which currently are often achieved by ontology techniques. In building ontology, automatic extraction technology is crucial. Because the general methods of text mining algorithm do not have obvious effect on online course, we designed automatic extracting course knowledge points (AECKP) algorithm for online course. It includes document classification, Chinese word segmentation, and POS tagging for each document. Vector Space Model (VSM) is used to calculate similarity and design the weight to optimize the TF-IDF algorithm output values, and the higher scores will be selected as knowledge points. Course documents of “C programming language” are selected for the experiment in this study. The results show that the proposed approach can achieve satisfactory accuracy rate and recall rate. PMID:26448738
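The TF-IDF scoring step the abstract describes, where the highest-scoring terms become candidate knowledge points, can be sketched on toy data (the AECKP pipeline's Chinese word segmentation, POS tagging, and VSM weighting are not reproduced; the documents and terms below are invented for illustration):

```python
import math
from collections import Counter

# Tiny stand-in corpus of already-tokenized course documents.
docs = [
    "pointer array pointer loop",
    "loop condition loop",
    "array index array",
]
tokenized = [d.split() for d in docs]
n_docs = len(tokenized)

# Document frequency: number of documents containing each term.
df = Counter()
for toks in tokenized:
    df.update(set(toks))

def tf_idf(doc_tokens):
    """Term frequency times inverse document frequency for one document."""
    tf = Counter(doc_tokens)
    total = len(doc_tokens)
    return {t: (c / total) * math.log(n_docs / df[t]) for t, c in tf.items()}

scores = tf_idf(tokenized[0])
top = max(scores, key=scores.get)   # highest-scoring candidate knowledge point
```

Here "pointer" wins for the first document: it is frequent locally but appears in only one document, exactly the profile a knowledge-point extractor favors.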
Wavelet Algorithms for Illumination Computations
NASA Astrophysics Data System (ADS)
Schroder, Peter
One of the core problems of computer graphics is the computation of the equilibrium distribution of light in a scene. This distribution is given as the solution to a Fredholm integral equation of the second kind involving an integral over all surfaces in the scene. In the general case such solutions can only be numerically approximated, and are generally costly to compute, due to the geometric complexity of typical computer graphics scenes. For this computation both Monte Carlo and finite element techniques (or hybrid approaches) are typically used. A simplified version of the illumination problem is known as radiosity, which assumes that all surfaces are diffuse reflectors. For this case hierarchical techniques, first introduced by Hanrahan et al. (32), have recently gained prominence. The hierarchical approaches lead to an asymptotic improvement when only finite precision is required. The resulting algorithms have cost proportional to O(k^2 + n) versus the usual O(n^2) (k is the number of input surfaces, n the number of finite elements into which the input surfaces are meshed). Similarly a hierarchical technique has been introduced for the more general radiance problem (which allows glossy reflectors) by Aupperle et al. (6). In this dissertation we show the equivalence of these hierarchical techniques to the use of a Haar wavelet basis in a general Galerkin framework. By so doing, we come to a deeper understanding of the properties of the numerical approximations used and are able to extend the hierarchical techniques to higher orders. In particular, we show the correspondence of the geometric arguments underlying hierarchical methods to the theory of Calderon-Zygmund operators and their sparse realization in wavelet bases. The resulting wavelet algorithms for radiosity and radiance are analyzed and numerical results achieved with our implementation are reported. We find that the resulting algorithms achieve smaller and smoother errors at equivalent work.
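The equivalence the dissertation draws between hierarchical radiosity and a Haar wavelet basis rests on a simple property: on smooth (locally near-constant) signals, Haar detail coefficients vanish, so the operator becomes sparse. A minimal one-level 1-D Haar transform illustrates this (a sketch only; the Galerkin discretization of the transport operator is far beyond this snippet):

```python
import numpy as np

def haar_level(signal):
    """One level of the orthonormal 1-D Haar transform:
    pairwise averages (coarse part) and differences (detail part)."""
    s = np.asarray(signal, dtype=float)
    evens, odds = s[0::2], s[1::2]
    coarse = (evens + odds) / np.sqrt(2.0)
    detail = (evens - odds) / np.sqrt(2.0)
    return coarse, detail

# A piecewise-constant "radiosity" signal: all detail coefficients are zero,
# which is the sparsity hierarchical methods exploit.
coarse, detail = haar_level([4.0, 4.0, 2.0, 2.0])
```

Only where the signal varies within a pair does a detail coefficient survive, mirroring how hierarchical radiosity refines interactions only where the transport kernel varies.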
Modeling Achievement by Measuring the Enacted Instruction
ERIC Educational Resources Information Center
Walkup, John R.; Jones, Ben S.
2008-01-01
This article presents a mathematical algorithm that relates student achievement with directly observable, quantifiable teacher and student behaviors, producing a modified form of the Walberg model. The algorithm (1) expands the measurable factors that comprise the quality of instruction in a linear basis of research-based teaching components and…
Solution algorithms for the two-dimensional Euler equations on unstructured meshes
NASA Technical Reports Server (NTRS)
Whitaker, D. L.; Slack, David C.; Walters, Robert W.
1990-01-01
The objective of the study was to analyze implicit techniques employed in structured grid algorithms for solving two-dimensional Euler equations and extend them to unstructured solvers in order to accelerate convergence rates. A comparison is made between nine different algorithms for both first-order and second-order accurate solutions. Higher-order accuracy is achieved by using multidimensional monotone linear reconstruction procedures. The discussion is illustrated by results for flow over a transonic circular arc.
Parental Involvement and Academic Achievement
ERIC Educational Resources Information Center
Goodwin, Sarah Christine
2015-01-01
This research study examined the correlation between student achievement and parents' perceptions of their involvement in their child's schooling. Parent participants completed the Parent Involvement Project Parent Questionnaire. Results slightly indicated that parents of students with higher levels of achievement perceived less demand or invitations…
Using Strassen's algorithm to accelerate the solution of linear systems
NASA Technical Reports Server (NTRS)
Bailey, David H.; Lee, King; Simon, Horst D.
1990-01-01
Strassen's algorithm for fast matrix-matrix multiplication has been implemented for matrices of arbitrary shapes on the CRAY-2 and CRAY Y-MP supercomputers. Several techniques have been used to reduce the scratch space requirement for this algorithm while simultaneously preserving a high level of performance. When the resulting Strassen-based matrix multiply routine is combined with some routines from the new LAPACK library, LU decomposition can be performed with rates significantly higher than those achieved by conventional means. We succeeded in factoring a 2048 x 2048 matrix on the CRAY Y-MP at a rate equivalent to 325 MFLOPS.
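Strassen's scheme replaces the eight block multiplications of the naive recursive method with seven, at the cost of extra additions. A compact recursive sketch for power-of-two sizes follows (the paper's CRAY-specific scratch-space optimizations and arbitrary-shape handling are not reproduced):

```python
import numpy as np

def strassen(A, B, leaf=64):
    """Strassen multiply for square matrices whose size is a power of two.
    Falls back to the ordinary product below the leaf size."""
    n = A.shape[0]
    if n <= leaf:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # The seven Strassen products.
    M1 = strassen(A11 + A22, B11 + B22, leaf)
    M2 = strassen(A21 + A22, B11, leaf)
    M3 = strassen(A11, B12 - B22, leaf)
    M4 = strassen(A22, B21 - B11, leaf)
    M5 = strassen(A11 + A12, B22, leaf)
    M6 = strassen(A21 - A11, B11 + B12, leaf)
    M7 = strassen(A12 - A22, B21 + B22, leaf)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
C = strassen(A, B, leaf=1)        # agrees with A @ B up to rounding
```

In practice the leaf size is kept large so the recursion only skims the top levels, which is where the flop savings outweigh the extra additions and scratch storage.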
Efficient maximum entropy algorithms for electronic structure
Silver, R.N.; Roeder, H.; Voter, A.F.; Kress, J.D.
1996-04-01
Two Chebyshev recursion methods are presented for calculations with very large sparse Hamiltonians, the kernel polynomial method (KPM) and the maximum entropy method (MEM). If limited statistical accuracy and energy resolution are acceptable, they provide linear scaling methods for the calculation of physical properties involving large numbers of eigenstates such as densities of states, spectral functions, thermodynamics, total energies for Monte Carlo simulations and forces for molecular dynamics. KPM provides a uniform approximation to a DOS, with resolution inversely proportional to the number of Chebyshev moments, while MEM can achieve significantly higher, but non-uniform, resolution at the risk of possible artifacts. This paper emphasizes efficient algorithms.
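The Chebyshev moment recursion underlying KPM can be shown in a few lines. This dense-matrix sketch computes exact moments by trace; the linear-scaling variants the abstract refers to instead estimate the traces stochastically with matrix-vector products on random vectors:

```python
import numpy as np

def kpm_moments(H, n_moments):
    """Chebyshev moments mu_m = Tr T_m(H) / N via the two-term recursion
    T_{m+1} = 2 H T_m - T_{m-1}. H must be Hermitian with its spectrum
    rescaled into [-1, 1]."""
    N = H.shape[0]
    T_prev = np.eye(N)              # T_0(H) = I
    T_curr = H.copy()               # T_1(H) = H
    mu = [np.trace(T_prev) / N, np.trace(T_curr) / N]
    for _ in range(2, n_moments):
        T_next = 2.0 * H @ T_curr - T_prev
        mu.append(np.trace(T_next) / N)
        T_prev, T_curr = T_curr, T_next
    return np.array(mu)

# Toy Hamiltonian with eigenvalues +0.5 and -0.5.
mu = kpm_moments(np.diag([0.5, -0.5]), 4)
```

The density of states is then reconstructed from these moments, typically damped by a kernel (e.g. Jackson) to suppress Gibbs oscillations, with resolution inversely proportional to the number of moments, as the abstract states.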
NASA Astrophysics Data System (ADS)
Abrams, Daniel S.
This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases (commonly found in ab initio physics and chemistry problems) for which all known classical algorithms require exponential time. Fast algorithms for simulating many body Fermi systems are also provided in both first and second quantized descriptions. An efficient quantum algorithm for anti-symmetrization is given as well as a detailed discussion of a simulation of the Hubbard model. In addition, quantum algorithms that calculate numerical integrals and various characteristics of stochastic processes are described. Two techniques are given, both of which obtain an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo (probabilistic) methods. I derive a simpler and slightly faster version of Grover's mean algorithm, show how to apply quantum counting to the problem, develop some variations of these algorithms, and show how both (apparently distinct) approaches can be understood from the same unified framework. Finally, the relationship between physics and computation is explored in some more depth, and it is shown that computational complexity theory depends very sensitively on physical laws. In particular, it is shown that nonlinear quantum mechanics allows for the polynomial time solution of NP-complete and #P oracle problems. Using the Weinberg model as a simple example, the explicit construction of the necessary gates is derived from the underlying physics. Nonlinear quantum algorithms are also presented using Polchinski type nonlinearities which do not allow for superluminal communication. (Copies available exclusively from MIT Libraries, Rm. 14- 0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)
Measuring and Recording Student Achievement
ERIC Educational Resources Information Center
Universities UK, 2004
2004-01-01
The Measuring and Recording Student Achievement Scoping Group was established by Universities UK and the Standing Conference of Principals (SCOP), with the support of the Higher Education Funding Council for England (HEFCE) in October 2003 to review the recommendations from the UK Government White Paper "The Future of Higher Education" relating…
Steganographic system based on higher-order statistics
NASA Astrophysics Data System (ADS)
Tzschoppe, Roman; Baeuml, Robert; Huber, Johannes; Kaup, Andre
2003-06-01
Universal blind steganalysis attempts to detect steganographic data without knowledge about the applied steganographic system. Farid proposed such a detection algorithm based on higher-order statistics for separating original images from stego images. His method shows an astonishing performance on current steganographic schemes. Starting from the statistical approach in Farid's algorithm, we investigate the well known steganographic tool Jsteg as well as a newer approach proposed by Eggers et al., which relies on histogram-preserving data mapping. Both schemes show weaknesses leading to a certain detectability. Further analysis shows which statistic characteristics make both schemes vulnerable. Based on these results, the histogram preserving approach is enhanced such that it achieves perfect security with respect to Farid's algorithm.
Sobel, E.; Lange, K.; O`Connell, J.R.
1996-12-31
Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.
Preconditioned quantum linear system algorithm.
Clader, B D; Jacobs, B C; Sprouse, C R
2013-06-21
We describe a quantum algorithm that generalizes the quantum linear system algorithm [Harrow et al., Phys. Rev. Lett. 103, 150502 (2009)] to arbitrary problem specifications. We develop a state preparation routine that can initialize generic states, show how simple ancilla measurements can be used to calculate many quantities of interest, and integrate a quantum-compatible preconditioner that greatly expands the number of problems that can achieve exponential speedup over classical linear systems solvers. To demonstrate the algorithm's applicability, we show how it can be used to compute the electromagnetic scattering cross section of an arbitrary target exponentially faster than the best classical algorithm. PMID:23829722
Research on algorithm for infrared hyperspectral imaging Fourier transform spectrometer technology
NASA Astrophysics Data System (ADS)
Wan, Lifang; Chen, Yan; Liao, Ningfang; Lv, Hang; He, Shufang; Li, Yasheng
2015-08-01
This paper reports an algorithm for infrared hyperspectral imaging Fourier transform spectrometer technology. Six different apodization functions are used and compared, and the Forman phase-correction technique is studied and improved; the fast Fourier transform (FFT) is used in place of linear convolution to reduce the amount of computation. Interferograms acquired by the Infrared Hyperspectral Imaging Radiometric Spectrometer are corrected and reconstructed by the improved algorithm, which reduces noise and accelerates computation while achieving higher spectral accuracy.
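As a rough illustration of why an FFT replaces direct linear convolution in this kind of processing, the sketch below (plain Python, not the authors' code; the recursive radix-2 FFT and all function names are illustrative assumptions) computes the same linear convolution both directly in O(N^2) and via zero-padded FFTs in O(N log N):

```python
import cmath

def fft(x):
    # Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of 2.
    n = len(x)
    if n == 1:
        return [x[0]]
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

def ifft(x):
    # Inverse FFT via conjugation: ifft(x) = conj(fft(conj(x))) / n.
    n = len(x)
    y = fft([v.conjugate() for v in x])
    return [v.conjugate() / n for v in y]

def conv_direct(a, b):
    # O(N^2) linear convolution, the baseline being replaced.
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def conv_fft(a, b):
    # O(N log N) route: zero-pad to a power of 2, multiply spectra.
    m = len(a) + len(b) - 1
    n = 1
    while n < m:
        n *= 2
    fa = fft([complex(v) for v in a] + [0j] * (n - len(a)))
    fb = fft([complex(v) for v in b] + [0j] * (n - len(b)))
    return [v.real for v in ifft([p * q for p, q in zip(fa, fb)])][:m]
```

For interferograms thousands of samples long, the spectral route cuts the multiply count by orders of magnitude, which is the "reduced quantity of computation" the abstract refers to.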
Optical rate sensor algorithms
NASA Technical Reports Server (NTRS)
Uhde-Lacovara, Jo A.
1989-01-01
Optical sensors, in particular Charge Coupled Device (CCD) arrays, will be used on Space Station to track stars in order to provide an inertial attitude reference. Algorithms are presented to derive attitude rate from the optical sensors. The first algorithm is a recursive differentiator. A variance reduction factor (VRF) of 0.0228 was achieved with a rise time of 10 samples; a VRF of 0.2522 gives a rise time of 4 samples. The second algorithm is based on direct manipulation of the pixel intensity outputs of the sensor. In 1-dimensional simulations, the derived rate was within 0.07 percent of the actual rate in the presence of additive Gaussian noise with a signal-to-noise ratio of 60 dB.
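The abstract does not give the differentiator's exact form; the following is a minimal sketch of one plausible recursive differentiator (a backward difference smoothed by a first-order IIR filter; `alpha` is a hypothetical smoothing parameter, not a value from the paper) showing the variance-reduction versus rise-time trade-off:

```python
def recursive_rate(samples, dt, alpha=0.8):
    """Estimate a signal's rate with a simple recursive differentiator.

    Higher alpha reduces output variance (better VRF) but lengthens
    the rise time; lower alpha responds faster but passes more noise.
    """
    rate = 0.0
    rates = []
    for k in range(1, len(samples)):
        raw = (samples[k] - samples[k - 1]) / dt   # backward difference
        rate = alpha * rate + (1.0 - alpha) * raw  # first-order low-pass
        rates.append(rate)
    return rates
```

On a constant-slope input the estimate converges geometrically to the true rate, with the convergence speed set by `alpha`.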
Sharing Leadership Responsibilities Results in Achievement Gains
ERIC Educational Resources Information Center
Armistead, Lew
2010-01-01
Collective, not individual, leadership in schools has a greater impact on student achievement; when principals and teachers share leadership responsibilities, student achievement is higher; and schools having high student achievement also display a vision for student achievement and teacher growth. Those are just a few of the insights into school…
Graded Achievement, Tested Achievement, and Validity
ERIC Educational Resources Information Center
Brookhart, Susan M.
2015-01-01
Twenty-eight studies of grades, over a century, were reviewed using the argument-based approach to validity suggested by Kane as a theoretical framework. The review draws conclusions about the meaning of graded achievement, its relation to tested achievement, and changes in the construct of graded achievement over time. "Graded…
ERIC Educational Resources Information Center
Hendrickson, Robert M.; Gregory, Dennis E.
Decisions made by federal and state courts during 1983 concerning higher education are reported in this chapter. Issues of employment and the treatment of students underlay the bulk of the litigation. Specific topics addressed in these and other cases included federal authority to enforce regulations against age discrimination and to revoke an…
ERIC Educational Resources Information Center
Hendrickson, Robert M.
Litigation in 1987 was very brisk with an increase in the number of higher education cases reviewed. Cases discussed in this chapter are organized under four major topics: (1) intergovernmental relations; (2) employees, involving discrimination claims, tenured and nontenured faculty, collective bargaining and denial of employee benefits; (3)…
ERIC Educational Resources Information Center
Hendrickson, Robert M.; Finnegan, Dorothy E.
The higher education case law in 1988 is extensive. Cases discussed in this chapter are organized under five major topics: (1) intergovernmental relations; (2) employees, involving discrimination claims, tenured and nontenured faculty, collective bargaining, and denial of employee benefits; (3) students, involving admissions, financial aid, First…
ERIC Educational Resources Information Center
Hendrickson, Robert M.
This eighth chapter of "The Yearbook of School Law, 1986" summarizes and analyzes over 330 state and federal court cases litigated in 1985 in which institutions of higher education were involved. Among the topics examined were relationships between postsecondary institutions and various governmental agencies; discrimination in the employment of…
Synthesis of Greedy Algorithms Using Dominance Relations
NASA Technical Reports Server (NTRS)
Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.
2010-01-01
Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection, and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
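Activity selection, one of the examples named in the abstract, is the textbook greedy algorithm justified by a dominance relation: among the remaining activities, the one finishing earliest dominates any alternative choice. A minimal sketch of that algorithm (an illustration of the example problem, not the authors' synthesis framework):

```python
def select_activities(intervals):
    """Greedy activity selection over (start, finish) pairs.

    Dominance argument: sorting by finish time and always keeping the
    next compatible activity can never block more later activities
    than any other choice would.
    """
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:     # compatible with what we kept
            chosen.append((start, finish))
            last_finish = finish
    return chosen
```

The loop runs in linear time after the sort, matching the linear-time performance the abstract attributes to greedy algorithms once problem structure is exploited.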
Speckle reduction via higher order total variation approach.
Wensen Feng; Hong Lei; Yang Gao
2014-04-01
Multiplicative noise (also known as speckle) reduction is a prerequisite for many image-processing tasks in coherent imaging systems, such as the synthetic aperture radar. One approach extensively used in this area is based on total variation (TV) regularization, which can recover significantly sharp edges of an image, but suffers from the staircase-like artifacts. In order to overcome the undesirable deficiency, we propose two novel models for removing multiplicative noise based on total generalized variation (TGV) penalty. The TGV regularization has been mathematically proven to be able to eliminate the staircasing artifacts by being aware of higher order smoothness. Furthermore, an efficient algorithm is developed for solving the TGV-based optimization problems. Numerical experiments demonstrate that our proposed methods achieve state-of-the-art results, both visually and quantitatively. In particular, when the image has some higher order smoothness, our methods outperform the TV-based algorithms. PMID:24808350
Parallel algorithms and architectures for the manipulator inertia matrix
Amin-Javaheri, M.
1989-01-01
Several parallel algorithms and architectures to compute the manipulator inertia matrix in real time are proposed. An O(N) and an O(log₂N) parallel algorithm based upon recursive computation of the inertial parameters of sets of composite rigid bodies are formulated. One- and two-dimensional systolic architectures are presented to implement the O(N) parallel algorithm. A cube architecture is employed to compute the diagonal elements of the inertia matrix in O(log₂N) time and the upper off-diagonal elements in O(N) time. The resulting K₁O(N) + K₂O(log₂N) parallel algorithm is more efficient for a cube network implementation. All the architectural configurations are based upon a VLSI robotics processor exploiting fine-grain parallelism. In evaluating all the architectural configurations, significant performance parameters such as I/O time and idle time due to processor synchronization, as well as CPU utilization and on-chip memory size, are fully included. The O(N) and O(log₂N) parallel algorithms adhere to the precedence relationships among the processors. In order to achieve a higher speedup factor, however, parallel algorithms in conjunction with non-strict computational models are devised to relax interprocess precedence and, as a result, decrease the effective computational delays. The effectiveness of the non-strict computational algorithms is verified by computer simulations based on a PUMA 560 robot manipulator. It is demonstrated that a combination of parallel algorithms and architectures results in a very effective approach to achieving real-time response in computing the manipulator inertia matrix.
Pedestrian navigation algorithm based on MIMU with building heading/magnetometer
NASA Astrophysics Data System (ADS)
Meng, Xiang-bin; Pan, Xian-fei; Chen, Chang-hao; Hu, Xiao-ping
2016-01-01
To improve the accuracy of a low-cost MIMU inertial navigation system for pedestrian navigation, and to reduce the heading error caused by the low accuracy of MIMU components, a novel algorithm is put forward that fuses building-heading constraint information with magnetic heading information. We analyse the conditions of application and the corrective effect of building heading and magnetic heading, and then conduct experiments in an indoor environment. The results show that the proposed algorithm better restricts the heading-drift problem and achieves higher navigation precision.
Quantum defragmentation algorithm
Burgarth, Daniel; Giovannetti, Vittorio
2010-08-15
In this addendum to our paper [D. Burgarth and V. Giovannetti, Phys. Rev. Lett. 99, 100501 (2007)] we prove that during the transformation that allows one to enforce control by relaxation on a quantum system, the ancillary memory can be kept at a finite size, independently from the fidelity one wants to achieve. The result is obtained by introducing the quantum analog of defragmentation algorithms which are employed for efficiently reorganizing classical information in conventional hard disks.
Achieving Communicative Competence: The Role of Higher Education.
ERIC Educational Resources Information Center
Fatt, James Poon Teng
1991-01-01
A study investigated the communicative competencies required in English as a Second Language (ESL) by 200 business and accounting students at Nanyang Technological Institute (Singapore) and explored a communicatively based ESL curriculum design. Student attitudes about the current linguistically based ESL program were also examined. Results are…
Time Management and Academic Achievement of Higher Secondary Students
ERIC Educational Resources Information Center
Cyril, A. Vences
2015-01-01
The only thing that cannot be changed by man is time. One cannot get back time that is lost or gone, and nothing can be substituted for time. Time management is actually self-management: the skills that people need to manage others are the same skills required to manage themselves. The purpose of the present study was to explore the relation between…
Higher Order Thinking Skills: Challenging All Students to Achieve
ERIC Educational Resources Information Center
Williams, R. Bruce
2007-01-01
Explicit instruction in thinking skills must be a priority goal of all teachers. In this book, the author presents a framework of the five Rs: Relevancy, Richness, Relatedness, Rigor, and Recursiveness. The framework serves to illuminate instruction in critical and creative thinking skills for K-12 teachers across content areas. Each chapter…
Middle Grades: Quality Teaching Equals Higher Student Achievement. Research Brief
ERIC Educational Resources Information Center
Bottoms, Gene; Hertl, Jordan; Mollette, Melinda; Patterson, Lenora
2014-01-01
The middle grades are critical to public school systems and our nation's economy. They are the make-or-break point in students' futures. Studies repeatedly show that when students are not engaged and lose interest in the middle grades, they are likely to fall behind in ninth grade and later drop out of school. When this happens, the workforce suffers, and…
Messy genetic algorithms: Recent developments
Kargupta, H.
1996-09-01
Messy genetic algorithms define a rare class of algorithms that realize the need for detecting appropriate relations among members of the search domain in optimization. This paper reviews earlier works in messy genetic algorithms and describes some recent developments. It also describes the gene expression messy GA (GEMGA), an O(Λ^κ(ℓ² + κ)) sample-complexity algorithm for the class of order-κ delineable problems (problems that can be solved by considering no higher than order-κ relations) of size ℓ and alphabet size Λ. Experimental results are presented to demonstrate the scalability of the GEMGA.
A novel image encryption algorithm using chaos and reversible cellular automata
NASA Astrophysics Data System (ADS)
Wang, Xingyuan; Luan, Dapeng
2013-11-01
In this paper, a novel image encryption scheme is proposed based on reversible cellular automata (RCA) combined with chaos. The algorithm uses an intertwining logistic map with complex behavior and periodic-boundary reversible cellular automata. Each pixel of the image is split into units of 4 bits, and in the confusion stage a pseudorandom key stream generated by the intertwining logistic map permutes these units. In the diffusion stage, two-dimensional reversible cellular automata, which are discrete dynamical systems, are iterated for many rounds to achieve bit-level diffusion; only the higher 4 bits of each pixel are considered, because the higher 4 bits carry almost all of the information of an image. Theoretical analysis and experimental results demonstrate that the proposed algorithm achieves a high security level and performs well against common attacks such as differential and statistical attacks. The algorithm belongs to the class of symmetric systems.
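A toy sketch of the confusion stage only, assuming a plain logistic map in place of the paper's intertwining map and a keystream-driven Fisher-Yates shuffle (all parameter values and function names are hypothetical, chosen for illustration):

```python
def logistic_keystream(x0, r, n, burn_in=100):
    """Pseudorandom floats in (0, 1) from the logistic map x <- r*x*(1-x).

    The paper uses an intertwining logistic map; the plain map here is a
    simplification. A burn-in discards the transient.
    """
    x = x0
    for _ in range(burn_in):
        x = r * x * (1.0 - x)
    out = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(x)
    return out

def split_nibbles(pixels):
    """Split each 8-bit pixel into its high and low 4-bit units."""
    out = []
    for p in pixels:
        out.append(p >> 4)
        out.append(p & 0x0F)
    return out

def permute_units(units, x0=0.3141, r=3.9999):
    """Confusion stage: key-driven Fisher-Yates shuffle of 4-bit units."""
    units = list(units)
    ks = logistic_keystream(x0, r, len(units))
    for i in range(len(units) - 1, 0, -1):
        j = int(ks[i] * (i + 1))   # chaotic index in [0, i] since ks[i] < 1
        units[i], units[j] = units[j], units[i]
    return units
```

Because the shuffle is a deterministic function of the key (x0, r), the receiver can regenerate the same keystream and undo the swaps in reverse order, which is what makes the scheme symmetric.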
Analysis and an image recovery algorithm for ultrasonic tomography system
NASA Technical Reports Server (NTRS)
Jin, Michael Y.
1994-01-01
The problem of ultrasonic reflectivity tomography is similar to that of a spotlight-mode aircraft Synthetic Aperture Radar (SAR) system. The analysis for a circular-path spotlight-mode SAR in this paper leads to insight into the system characteristics. It indicates that such a system, when operated over a wide bandwidth, is capable of achieving the ultimate resolution: one quarter of the wavelength of the carrier frequency. An efficient processing algorithm based on the exact two-dimensional spectrum is presented. The results of simulation indicate that the impulse responses meet the predicted resolution performance. Compared to an algorithm previously developed for ultrasonic reflectivity tomography, the throughput rate of this algorithm is about ten times higher.
An ellipse detection algorithm based on edge classification
NASA Astrophysics Data System (ADS)
Yu, Liu; Chen, Feng; Huang, Jianming; Wei, Xiangquan
2015-12-01
In order to enhance the speed and accuracy of ellipse detection, an ellipse detection algorithm based on edge classification is proposed. Redundant edge points are removed by serializing edges into point form and enforcing a distance constraint between edge points. Effective classification is achieved using the angle between edge points as the criterion, which greatly increases the probability that randomly selected edge points fall on the same ellipse. Ellipse-fitting accuracy is significantly improved by optimization of the RED algorithm, using the Euclidean distance from an edge point to the elliptical boundary. Experimental results show that the method detects ellipses well even when edges suffer interference or block each other, and that it achieves higher detection precision and lower time consumption than the RED algorithm.
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Lomax, Harvard
1987-01-01
The past decade has seen considerable activity in algorithm development for the Navier-Stokes equations. This has resulted in a wide variety of useful new techniques. Some examples for the numerical solution of the Navier-Stokes equations are presented, divided into two parts. One is devoted to the incompressible Navier-Stokes equations, and the other to the compressible form.
A retrodictive stochastic simulation algorithm
Vaughan, T.G.; Drummond, P.D.; Drummond, A.J.
2010-05-20
In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
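For context, the predictive counterpart that the retrodictive algorithm complements is the standard Gillespie stochastic simulation algorithm. A minimal sketch for a pure-decay process A → ∅ (function names and parameter values are illustrative, not from the paper):

```python
import random

def ssa_decay(n0, k, t_end, rng=random.Random(42)):
    """Standard (predictive) Gillespie SSA for A -> 0 with per-molecule
    rate k: sample exponential waiting times from the total propensity
    and fire one decay per event until t_end.

    The paper's retrodictive algorithm runs the complementary inference,
    from known final states back to likely initial states.
    """
    t, n = 0.0, n0
    while n > 0:
        a = k * n                  # total propensity of the decay channel
        t += rng.expovariate(a)    # time to the next reaction event
        if t > t_end:
            break
        n -= 1                     # one molecule decays
    return n
```

Averaged over many runs, the simulated population tracks the deterministic mean n0·exp(-k·t), which is a quick sanity check on the sampler.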
NASA Astrophysics Data System (ADS)
El-Guibaly, Fayez; Sabaa, A.
1996-10-01
In this paper, we introduce modifications to the classic CORDIC algorithm to reduce the number of iterations and hence the rounding noise. The modified algorithm needs at most half the number of iterations to achieve the same accuracy as the classical one. The modifications are applicable to linear, circular, and hyperbolic CORDIC in both vectoring and rotation modes. Simulations illustrate the effect of the new modifications.
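The classical circular CORDIC in rotation mode, which the paper modifies, can be sketched as follows (the common textbook form in floating point, not the modified algorithm; in hardware the multiplications by 2^-i are shifts):

```python
import math

def cordic_rotate(angle, n_iter=32):
    """Classic circular CORDIC, rotation mode: return (cos, sin) of
    angle (|angle| within ~1.74 rad) using shift-and-add micro-rotations
    plus one final multiply by the precomputed gain K."""
    atans = [math.atan(2.0 ** -i) for i in range(n_iter)]
    k = 1.0
    for i in range(n_iter):
        k *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))  # accumulated gain
    x, y, z = 1.0, 0.0, angle
    for i in range(n_iter):
        d = 1.0 if z >= 0 else -1.0        # rotate toward z = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * atans[i]                  # residual angle
    return x * k, y * k
```

Each iteration contributes roughly one bit of accuracy, which is why halving the iteration count, as the abstract claims for the modified algorithm, directly halves the rounding-noise accumulation.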
Testing an earthquake prediction algorithm
Kossobokov, V.G.; Healy, J.H.; Dewey, J.W.
1997-01-01
A test to evaluate earthquake prediction algorithms is being applied to a Russian algorithm known as M8. The M8 algorithm makes intermediate-term predictions for earthquakes to occur in a large circle, based on integral counts of transient seismicity in the circle. In a retroactive prediction for the period January 1, 1985 to July 1, 1991, the algorithm as configured for the forward test would have predicted eight of ten strong earthquakes in the test area. A null hypothesis, based on random assignment of predictions, predicts eight earthquakes in 2.87% of the trials. The forward test began July 1, 1991 and will run through December 31, 1997. As of July 1, 1995, the algorithm had forward predicted five out of nine earthquakes in the test area, a success ratio that would be achieved in 53% of random trials under the null hypothesis.
Network representations of knowledge about chemical equilibrium: Variations with achievement
NASA Astrophysics Data System (ADS)
Wilson, Janice M.
This study examined variation in the organization of domain-specific knowledge by 50 Year-12 chemistry students and 4 chemistry teachers. The study used nonmetric multidimensional scaling (MDS) and the Pathfinder network-generating algorithm to investigate individual and group differences in student concepts maps about chemical equilibrium. MDS was used to represent the individual maps in two-dimensional space, based on the presence or absence of paired propositional links. The resulting separation between maps reflected degree of hierarchical structure, but also reflected independent measures of student achievement. Pathfinder was then used to produce semantic networks from pooled data from high and low achievement groups using proximity matrices derived from the frequencies of paired concepts. The network constructed from maps of higher achievers (coherence measure = 0.18, linked pairs = 294, and number of subjects = 32) showed greater coherence, more concordance in specific paired links, more important specific conceptual relationships, and greater hierarchical organization than did the network constructed from maps of lower achievers (coherence measure = 0.12, linked pairs = 552, and number of subjects = 22). These differences are interpreted in terms of qualitative variation in knowledge organization by two groups of individuals with different levels of relative expertise (as reflected in achievement scores) concerning the topic of chemical equilibrium. The results suggest that the technique of transforming paired links in concept maps into proximity matrices for input to multivariate analyses provides a suitable methodology for comparing and documenting changes in the organization and structure of conceptual knowledge within and between individual students.
Algorithms and Application of Sparse Matrix Assembly and Equation Solvers for Aeroacoustics
NASA Technical Reports Server (NTRS)
Watson, W. R.; Nguyen, D. T.; Reddy, C. J.; Vatsa, V. N.; Tang, W. H.
2001-01-01
An algorithm for symmetric sparse equation solutions on an unstructured grid is described. Efficient, sequential sparse algorithms for degree-of-freedom reordering, supernodes, symbolic/numerical factorization, and forward-backward solution phases are reviewed. Three sparse algorithms for the generation and assembly of symmetric systems of matrix equations are presented. The accuracy and numerical performance of the sequential version of the sparse algorithms are evaluated over the frequency range of interest in a three-dimensional aeroacoustics application. Results show that the solver solutions are accurate using a discretization of 12 points per wavelength. Results also show that the first assembly algorithm is impractical for high-frequency noise calculations. The second and third assembly algorithms have nearly equal performance at low source frequencies, but at higher source frequencies the third algorithm saves CPU time and RAM. The CPU time and RAM required by the second and third assembly algorithms are two orders of magnitude smaller than those required by the sparse equation solver. A sequential version of these sparse algorithms can, therefore, be conveniently incorporated into a substructuring formulation for domain decomposition to achieve parallel computation, where different substructures are handled by different parallel processors.
Algorithms, games, and evolution
Chastain, Erick; Livnat, Adi; Papadimitriou, Christos; Vazirani, Umesh
2014-01-01
Even the most seasoned students of evolution, starting with Darwin himself, have occasionally expressed amazement that the mechanism of natural selection has produced the whole of Life as we see it around us. There is a computational way to articulate the same amazement: “What algorithm could possibly achieve all this in a mere three and a half billion years?” In this paper we propose an answer: We demonstrate that in the regime of weak selection, the standard equations of population genetics describing natural selection in the presence of sex become identical to those of a repeated game between genes played according to multiplicative weight updates (MWUA), an algorithm known in computer science to be surprisingly powerful and versatile. MWUA maximizes a tradeoff between cumulative performance and entropy, which suggests a new view on the maintenance of diversity in evolution. PMID:24979793
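A minimal sketch of the multiplicative weights update (MWUA) itself, here with a fixed payoff per action rather than the paper's repeated game between genes (all names and values are illustrative):

```python
def mwua(payoffs, n_rounds, eta=0.1):
    """Multiplicative weights update over a fixed set of actions.

    Each round, every action's weight is multiplied by (1 + eta*payoff)
    and the weights are renormalized into a probability mix. Under weak
    selection, the paper shows population genetics with sex takes
    exactly this multiplicative-update form.
    """
    n = len(payoffs)
    w = [1.0] * n
    for _ in range(n_rounds):
        w = [wi * (1.0 + eta * p) for wi, p in zip(w, payoffs)]
        total = sum(w)
        w = [wi / total for wi in w]   # keep w a probability distribution
    return w
```

With a small learning rate eta, the mix shifts only gradually toward the best action, keeping entropy high for longer; this is the performance-versus-diversity trade-off the abstract highlights.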
Image enhancement based on edge boosting algorithm
NASA Astrophysics Data System (ADS)
Ngernplubpla, Jaturon; Chitsobhuk, Orachat
2015-12-01
In this paper, a technique for image enhancement based on a proposed edge boosting algorithm, which reconstructs a high-quality image from a single low-resolution image, is described. The difficulty in single-image super-resolution is that the generic image priors residing in the low-resolution input image may not be sufficient to generate an effective solution. To succeed in super-resolution reconstruction, efficient prior knowledge should be estimated. The statistics of gradient priors, in terms of a priority map based on separable gradient estimation, maximum-likelihood edge estimation, and local variance, are introduced. The proposed edge boosting algorithm takes advantage of these gradient statistics to select appropriate enhancement weights: larger weights are applied to higher-frequency details, while low-frequency details are smoothed. Experimental results illustrate a significant performance improvement, both quantitative and perceptual; the proposed edge boosting algorithm produces high-quality results with fewer artifacts, sharper edges, superior texture areas, and finer detail with low noise.
A contourlet transform based algorithm for real-time video encoding
NASA Astrophysics Data System (ADS)
Katsigiannis, Stamos; Papaioannou, Georgios; Maroulis, Dimitris
2012-06-01
In recent years, real-time video communication over the internet has been widely utilized for applications like video conferencing. Streaming live video over heterogeneous IP networks, including wireless networks, requires video coding algorithms that can support various levels of quality in order to adapt to the network end-to-end bandwidth and transmitter/receiver resources. In this work, a scalable video coding and compression algorithm based on the Contourlet Transform is proposed. The algorithm allows for multiple levels of detail, without re-encoding the video frames, by just dropping the encoded information referring to higher resolution than needed. Compression is achieved by means of lossy and lossless methods, as well as variable bit rate encoding schemes. Furthermore, due to the transformation utilized, it does not suffer from blocking artifacts that occur with many widely adopted compression algorithms. Another highly advantageous characteristic of the algorithm is the suppression of noise induced by low-quality sensors usually encountered in web-cameras, due to the manipulation of the transform coefficients at the compression stage. The proposed algorithm is designed to introduce minimal coding delay, thus achieving real-time performance. Performance is enhanced by utilizing the vast computational capabilities of modern GPUs, providing satisfactory encoding and decoding times at relatively low cost. These characteristics make this method suitable for applications like video-conferencing that demand real-time performance, along with the highest visual quality possible for each user. Through the presented performance and quality evaluation of the algorithm, experimental results show that the proposed algorithm achieves better or comparable visual quality relative to other compression and encoding methods tested, while maintaining a satisfactory compression ratio. Especially at low bitrates, it provides more human-eye friendly images compared to
Comparing Science Achievement Constructs: Targeted and Achieved
ERIC Educational Resources Information Center
Ferrara, Steve; Duncan, Teresa
2011-01-01
This article illustrates how test specifications based solely on academic content standards, without attention to other cognitive skills and item response demands, can fall short of their targeted constructs. First, the authors inductively describe the science achievement construct represented by a statewide sixth-grade science proficiency test.…
Achieving energy efficiency during collective communications
Sundriyal, Vaibhav; Sosonkina, Masha; Zhang, Zhao
2012-09-13
Energy consumption has become a major design constraint in modern computing systems. With the advent of petaflops architectures, power-efficient software stacks have become imperative for scalability. Techniques such as dynamic voltage and frequency scaling (DVFS) and CPU clock modulation (throttling) are often used to reduce the power consumption of compute nodes. To avoid significant performance losses, these techniques should be applied judiciously during parallel application execution; communication phases, for example, may be good candidates for DVFS and CPU throttling without incurring a considerable performance loss. Collective operations are often treated as indivisible, and little attention has been devoted to the energy-saving potential of their algorithmic steps. In this work, two important collective communication operations, all-to-all and allgather, are investigated as to their augmentation with energy-saving strategies on a per-call basis. The experiments prove the viability of such a fine-grain approach. They also validate a theoretical power-consumption estimate for multicore nodes proposed here. While keeping the performance loss low, the obtained energy savings were always significantly higher than those achieved when DVFS or throttling were switched on across the entire application run.
Varieties of Achievement Motivation.
ERIC Educational Resources Information Center
Kukla, Andre; Scher, Hal
1986-01-01
A recent article by Nicholls on achievement motivation is criticized on three points: (1) definitions of achievement motives are ambiguous; (2) behavioral consequences predicted do not follow from explicit theoretical assumptions; and (3) Nicholls's account of the relation between his theory and other achievement theories is factually incorrect.…
Motivation and School Achievement.
ERIC Educational Resources Information Center
Maehr, Martin L.; Archer, Jennifer
Addressing the question, "What can be done to promote school achievement?", this paper summarizes the literature on motivation relating to classroom achievement and school effectiveness. Particular attention is given to how values, ideology, and various cultural patterns impinge on classroom performance and serve to enhance motivation to achieve.…
Mobility and Reading Achievement.
ERIC Educational Resources Information Center
Waters, Theresa Z.
A study examined the effect of geographic mobility on elementary school students' achievement. Although such mobility, which requires students to make multiple moves among schools, can have a negative impact on academic achievement, the hypothesis for the study was that it was not a determining factor in reading achievement test scores. Subjects…
ERIC Educational Resources Information Center
Kirby, John R.
Two studies examined the effectiveness of the PASS (Planning, Attention, Simultaneous, and Successive cognitive processes) theory of intelligence in predicting reading achievement scores of normally achieving children and distinguishing children with reading disabilities from normally achieving children. The first study dealt with predicting…
Strategic Planning for Higher Education.
ERIC Educational Resources Information Center
Kotler, Philip; Murphy, Patrick E.
1981-01-01
The framework necessary for achieving a strategic planning posture in higher education is outlined. The most important benefit of strategic planning for higher education decision makers is that it forces them to undertake a more market-oriented and systematic approach to long-range planning. (Author/MLW)
Acceleration of iterative image restoration algorithms.
Biggs, D S; Andrews, M
1997-03-10
A new technique for the acceleration of iterative image restoration algorithms is proposed. The method is based on the principles of vector extrapolation and does not require the minimization of a cost function. The algorithm is derived and its performance illustrated with Richardson-Lucy (R-L) and maximum entropy (ME) deconvolution algorithms and the Gerchberg-Saxton magnitude and phase retrieval algorithms. Considerable reduction in restoration times is achieved with little image distortion or computational overhead per iteration. The speedup achieved is shown to increase with the number of iterations performed and is easily adapted to suit different algorithms. An example R-L restoration achieves an average speedup of 40 times after 250 iterations and an ME method 20 times after only 50 iterations. An expression for estimating the acceleration factor is derived and confirmed experimentally. Comparisons with other acceleration techniques in the literature reveal significant improvements in speed and stability. PMID:18250863
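For reference, the unaccelerated Richardson-Lucy iteration that the vector-extrapolation technique speeds up can be sketched in one dimension (a simplified illustration, not the paper's implementation; the PSF is assumed normalized and odd-length):

```python
def convolve_same(x, h):
    """Centered, zero-padded linear convolution (h assumed odd-length)."""
    half = len(h) // 2
    out = []
    for i in range(len(x)):
        s = 0.0
        for j, hj in enumerate(h):
            k = i + half - j
            if 0 <= k < len(x):
                s += x[k] * hj
        out.append(s)
    return out

def richardson_lucy(observed, psf, n_iter=50):
    """Unaccelerated 1-D Richardson-Lucy deconvolution.

    Each iteration blurs the estimate, compares it with the data as a
    ratio, back-projects the ratio through the mirrored PSF, and applies
    the result as a multiplicative correction.
    """
    est = [sum(observed) / len(observed)] * len(observed)  # flat start
    psf_flip = psf[::-1]
    for _ in range(n_iter):
        blurred = convolve_same(est, psf)
        ratio = [o / b if b > 1e-12 else 0.0
                 for o, b in zip(observed, blurred)]
        corr = convolve_same(ratio, psf_flip)
        est = [e * c for e, c in zip(est, corr)]
    return est
```

The slow multiplicative convergence visible here (many iterations to re-sharpen even a clean blurred impulse) is exactly what motivates extrapolation-based acceleration.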
NASA Astrophysics Data System (ADS)
Wang, Bingjie; Pi, Shaohua; Sun, Qi; Jia, Bo
2015-05-01
An improved classification algorithm that considers multiscale wavelet packet Shannon entropy is proposed. Decomposition coefficients at all levels are obtained to build the initial Shannon entropy feature vector. After subtracting the Shannon entropy map of the background signal, the components with the strongest discriminating power in the initial feature vector are picked out to rebuild the Shannon entropy feature vector, which is then fed to a radial basis function (RBF) neural network for classification. Four types of man-made vibrational intrusion signals were recorded with a modified Sagnac interferometer. The performance of the improved classification algorithm has been evaluated in classification experiments with the RBF neural network under different diffusion coefficients. An 85% classification accuracy rate is achieved, higher than that of the other common algorithms. The classification results show that this improved algorithm can be used to classify vibrational intrusion signals in an automatic real-time monitoring system.
Efficient implementation of Jacobi algorithms and Jacobi sets on distributed memory architectures
Eberlein, P.J.; Park, H.
1990-04-01
One-sided methods for implementing Jacobi diagonalization algorithms have been recently proposed for both distributed memory and vector machines. These methods are naturally well suited to distributed memory and vector architectures because of their inherent parallelism and their abundance of vector operations. Also, one-sided methods require substantially less message passing than the two-sided methods, and thus can achieve higher efficiency. The authors describe in detail the use of the one-sided Jacobi rotation as opposed to the rotation used in the Hestenes algorithm; they perceive the difference to have been widely misunderstood. Furthermore, the one-sided algorithm generalizes to other problems such as the nonsymmetric eigenvalue problem while the Hestenes algorithm does not. The authors discuss two new implementations for Jacobi sets for a ring connected array of processors and show their isomorphism to the round-robin ordering.
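A minimal serial sketch of the one-sided Jacobi idea: rotations are applied to column pairs of a working copy of the matrix until all columns are mutually orthogonal, at which point the singular values are simply the column norms. The ring-connected orderings the authors analyze are not shown; sweep order and tolerances here are illustrative.

```python
import numpy as np

def one_sided_jacobi(A, tol=1e-12, max_sweeps=30):
    """One-sided Jacobi SVD: returns singular values in descending order."""
    U = np.array(A, dtype=float)
    n = U.shape[1]
    for _ in range(max_sweeps):
        off = 0.0
        for p in range(n - 1):
            for q in range(p + 1, n):
                alpha = U[:, p] @ U[:, p]
                beta = U[:, q] @ U[:, q]
                gamma = U[:, p] @ U[:, q]               # inner product to kill
                off = max(off, abs(gamma) / np.sqrt(alpha * beta + 1e-300))
                if abs(gamma) < tol:
                    continue
                zeta = (beta - alpha) / (2.0 * gamma)
                sgn = 1.0 if zeta >= 0 else -1.0
                t = sgn / (abs(zeta) + np.sqrt(1.0 + zeta * zeta))
                c = 1.0 / np.sqrt(1.0 + t * t)
                s = c * t
                up, uq = U[:, p].copy(), U[:, q].copy()
                U[:, p] = c * up - s * uq               # rotate the pair
                U[:, q] = s * up + c * uq
        if off < tol:                                   # all pairs orthogonal
            break
    return np.sort(np.linalg.norm(U, axis=0))[::-1]
```

Because each rotation touches only two columns, independent pairs can be rotated concurrently, which is the source of the parallelism the abstract describes.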
Excursion-Set-Mediated Genetic Algorithm
NASA Technical Reports Server (NTRS)
Noever, David; Baskaran, Subbiah
1995-01-01
Excursion-set-mediated genetic algorithm (ESMGA) is embodiment of method of searching for and optimizing computerized mathematical models. Incorporates powerful search and optimization techniques based on concepts analogous to natural selection and laws of genetics. In comparison with other genetic algorithms, this one achieves stronger condition for implicit parallelism. Includes three stages of operations in each cycle, analogous to biological generation.
Parallel Algorithm Solves Coupled Differential Equations
NASA Technical Reports Server (NTRS)
Hayashi, A.
1987-01-01
Numerical methods adapted to concurrent processing. Algorithm solves set of coupled partial differential equations by numerical integration. Adapted to run on hypercube computer, algorithm separates problem into smaller problems solved concurrently. Increase in computing speed with concurrent processing over that achievable with conventional sequential processing appreciable, especially for large problems.
Reactive Collision Avoidance Algorithm
NASA Technical Reports Server (NTRS)
Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred
2010-01-01
The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation for which passive algorithms cannot. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on
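The bang-off-bang parameterization and the offline parameter search can be sketched in one dimension for a rest-to-rest maneuver. The acceleration limit, grid resolution, and required displacement below are illustrative assumptions; the actual RCA solves a three-dimensional evasion problem indexed by collision geometry.

```python
import numpy as np

A_MAX = 0.1  # m/s^2 -- illustrative acceleration limit

def displacement(t_burn, t_coast, a=A_MAX):
    """Displacement of a rest-to-rest bang-off-bang maneuver: full
    acceleration for t_burn, coast for t_coast, full deceleration for
    t_burn.  Total distance = a*t_burn**2 + a*t_burn*t_coast."""
    return a * t_burn ** 2 + a * t_burn * t_coast

def fastest_maneuver(d_required, a=A_MAX, grid=200, t_max=600.0):
    """Offline grid search over the two trajectory parameters -- the kind
    of computation that can be tabulated into a look-up table for
    real-time use.  Returns (total time, t_burn, t_coast) or None."""
    best = None
    for t_burn in np.linspace(0.1, t_max / 2, grid):
        for t_coast in np.linspace(0.0, t_max, grid):
            if displacement(t_burn, t_coast, a) >= d_required:
                total = 2 * t_burn + t_coast
                if best is None or total < best[0]:
                    best = (total, float(t_burn), float(t_coast))
                break  # longer coasts for this burn only take more time
    return best
```

Because the search ranges only over the trajectory parameters, the results can be precomputed and stored, leaving only a table lookup for the onboard real-time step.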
A Synthesized Heuristic Task Scheduling Algorithm
Dai, Yanyan; Zhang, Xiangli
2014-01-01
Aiming at static task scheduling problems in heterogeneous environments, a heuristic task scheduling algorithm named HCPPEFT is proposed. In the task prioritizing phase, the algorithm chooses tasks using three levels of priority: critical tasks have the highest priority, then tasks with a longer path to the exit task are selected, and finally tasks with fewer predecessors are scheduled. In the resource selection phase, the algorithm uses task duplication to reduce the inter-resource communication cost; in addition, forecasting the impact of an assignment on all children of the current task permits better decisions when selecting resources. The proposed algorithm is compared with the STDH, PEFT, and HEFT algorithms on randomly generated graphs and sets of task graphs. The experimental results show that the new algorithm can achieve better scheduling performance. PMID:25254244
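The three-level prioritizing rule can be illustrated on a toy DAG. The task graph, costs, and tie-breaking details below are hypothetical, and the duplication-based resource selection phase is not shown; a real scheduler would also dispatch only tasks whose predecessors have finished.

```python
# Hypothetical four-task DAG: name -> (computation cost, successors).
TASKS = {
    'A': (2, ['B', 'C']),
    'B': (3, ['D']),
    'C': (1, ['D']),
    'D': (2, []),
}

def path_to_exit(t):
    """Longest path from t to the exit task, including t's own cost."""
    cost, succ = TASKS[t]
    return cost + max((path_to_exit(s) for s in succ), default=0)

def dist_from_entry(t):
    """Longest path from any entry task to t, excluding t's own cost."""
    preds = [p for p, (_, succ) in TASKS.items() if t in succ]
    return max((dist_from_entry(p) + TASKS[p][0] for p in preds), default=0)

def schedule_order():
    """Rank tasks by (critical?, path to exit, fewer predecessors)."""
    cp = max(dist_from_entry(t) + path_to_exit(t) for t in TASKS)
    def priority(t):
        on_critical_path = dist_from_entry(t) + path_to_exit(t) == cp
        n_preds = sum(t in succ for _, succ in TASKS.values())
        return (on_critical_path, path_to_exit(t), -n_preds)
    return sorted(TASKS, key=priority, reverse=True)
```

On this graph the critical path is A-B-D, so those tasks outrank C, and among them the longer remaining path wins.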
Fontana, W.
1990-12-13
In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
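A toy version of such a "function gas" can be sketched with ordinary closures standing in for lambda-calculus terms: composition is the interaction law, and the product of a collision replaces a random member so the ensemble size stays fixed. The basis functions and the choice of arithmetic mod 97 (to keep composed values bounded) are purely illustrative.

```python
import random

# Illustrative basis functions; arithmetic is mod 97 so values stay bounded.
BASIS = [lambda x: (x + 1) % 97, lambda x: (2 * x) % 97,
         lambda x: (x * x) % 97, lambda x: (x + 94) % 97]

def react(f, g):
    """Interaction law: two colliding functions produce their composite."""
    return lambda x: f(g(x))

def run_gas(n=50, steps=200, seed=0):
    """Fixed-size ensemble: each step, two randomly drawn members react
    and the product replaces a randomly chosen member."""
    rng = random.Random(seed)
    gas = [rng.choice(BASIS) for _ in range(n)]
    for _ in range(steps):
        f, g = rng.choice(gas), rng.choice(gas)
        gas[rng.randrange(n)] = react(f, g)
    return gas
```

The paper's actual model is far richer (string-encoded functions, graph representations, self-replicators), but the fixed-size replace-on-reaction loop above is the core iterated map.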
GPUs benchmarking in subpixel image registration algorithm
NASA Astrophysics Data System (ADS)
Sanz-Sabater, Martin; Picazo-Bueno, Jose Angel; Micó, Vicente; Ferrerira, Carlos; Granero, Luis; Garcia, Javier
2015-05-01
Image registration techniques are used across different scientific fields, such as medical imaging and optical metrology. The most straightforward way to calculate the shift between two images is cross correlation, taking the location of the highest value in the correlation image. The shift resolution is then given in whole pixels, which may not be enough for certain applications. Better results can be achieved by interpolating both images up to the desired resolution and applying the same technique, but the memory needed by the system is significantly higher. To avoid this memory consumption we implement a subpixel shifting method based on the FFT. Starting from the original images, a subpixel shift can be achieved by multiplying the discrete Fourier transform by linear phases with different slopes. This method is time consuming because checking each candidate shift requires new calculations, but the algorithm is highly parallelizable and very well suited to high-performance computing systems. GPU (Graphics Processing Unit) accelerated computing became popular more than ten years ago because GPUs offer hundreds of computational cores on a reasonably cheap card. In our case, we register the shift between two images by making a first approach through FFT-based correlation and then refining it at the subpixel level using the technique described above; we consider this a 'brute force' method. We therefore present a benchmark of the algorithm consisting of a first approach (pixel resolution) followed by subpixel refinement, decreasing the shifting step in every loop to reach high resolution in few steps. The program is executed on three different computers. Finally, we present the results of the computation with different kinds of CPUs and GPUs, checking the accuracy of the method and the time consumed on each computer, and discussing the advantages and disadvantages of using GPUs.
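The linear-phase subpixel shift and the brute-force refinement loop can be sketched as follows; image size, search span, and step size are illustrative, and a real implementation would move the inner loop onto the GPU.

```python
import numpy as np

def fourier_shift(img, dy, dx):
    """Shift img by (dy, dx) pixels (subpixel allowed) by applying a
    linear phase ramp to its DFT."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    phase = np.exp(-2j * np.pi * (fy * dy + fx * dx))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * phase))

def register_subpixel(ref, mov, step=0.1, span=1.0):
    """'Brute force' registration: integer shift from the correlation
    peak, then scan subpixel offsets around it.  Returns the (dy, dx)
    to apply to mov so that it aligns with ref."""
    corr = np.real(np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))))
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)
    dy0 = iy if iy <= ref.shape[0] // 2 else iy - ref.shape[0]
    dx0 = ix if ix <= ref.shape[1] // 2 else ix - ref.shape[1]
    best, best_err = (float(dy0), float(dx0)), np.inf
    for dy in np.arange(dy0 - span, dy0 + span + 1e-9, step):
        for dx in np.arange(dx0 - span, dx0 + span + 1e-9, step):
            err = np.sum((fourier_shift(mov, dy, dx) - ref) ** 2)
            if err < best_err:
                best, best_err = (round(dy, 3), round(dx, 3)), err
    return best
```

Every candidate offset costs a fresh FFT multiply, which is exactly the per-candidate recomputation the abstract identifies as the time sink; the candidates are independent, hence the easy parallelization.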
Higher order stationary subspace analysis
NASA Astrophysics Data System (ADS)
Panknin, Danny; von Bünau, Paul; Kawanabe, Motoaki; Meinecke, Frank C.; Müller, Klaus-Robert
2016-03-01
Non-stationarity in data is a ubiquitous problem in signal processing. The recent stationary subspace analysis procedure (SSA) has made it possible to decompose such data into a stationary subspace and a non-stationary part. Algorithmically, however, only weak non-stationarities could be tackled by SSA. The present paper takes the conceptual step of generalizing from the use of first and second moments, as in SSA, to higher order moments, thus defining the proposed higher order stationary subspace analysis procedure (HOSSA). The paper derives the novel procedure and presents simulations. An obvious trade-off between the necessity of estimating higher moments and the accuracy and robustness with which they can be estimated is observed. In an ideal setting with plenty of data, where higher moment information dominates, the novel approach can outperform standard SSA. However, with limited data, even when higher moments actually dominate the underlying data, SSA may still perform on par.
Efficient implementations of hyperspectral chemical-detection algorithms
NASA Astrophysics Data System (ADS)
Brett, Cory J. C.; DiPietro, Robert S.; Manolakis, Dimitris G.; Ingle, Vinay K.
2013-10-01
Many military and civilian applications depend on the ability to remotely sense chemical clouds using hyperspectral imagers, from detecting small but lethal concentrations of chemical warfare agents to mapping plumes in the aftermath of natural disasters. Real-time operation is critical in these applications but becomes difficult to achieve as the number of chemicals we search for increases. In this paper, we present efficient CPU and GPU implementations of matched-filter-based algorithms so that real-time operation can be maintained with higher chemical-signature counts. The optimized C++ implementations show between 3x and 9x speedup over vectorized MATLAB implementations.
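One way such implementations stay fast as the signature count grows is by batching all matched filters into a single matrix product. Below is a hedged numpy sketch of a classical matched-filter bank; the paper's exact algorithmic variants and optimizations are not reproduced.

```python
import numpy as np

def matched_filter_bank(X, targets, mu, cov):
    """Score N pixels (rows of X, one spectrum per row) against K target
    signatures with one matrix product.  Each column of Q is the classic
    matched filter q_k = C^-1 d_k / (d_k^T C^-1 d_k) with d_k = t_k - mu,
    so a pixel equal to mu scores 0 and one equal to t_k scores 1."""
    D = np.atleast_2d(targets) - mu        # K x B signature differences
    Q = np.linalg.solve(cov, D.T)          # B x K, columns = C^-1 d_k
    Q /= np.einsum('kb,bk->k', D, Q)       # normalize each filter
    return (np.atleast_2d(X) - mu) @ Q     # N x K score matrix
```

The heavy lifting is one (N x B) @ (B x K) product, which maps directly onto CPU BLAS or a GPU GEMM, so adding signatures widens a single matrix instead of adding passes over the cube.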
Heritability of Creative Achievement
ERIC Educational Resources Information Center
Piffer, Davide; Hur, Yoon-Mi
2014-01-01
Although creative achievement is a subject of much attention to lay people, the origin of individual differences in creative accomplishments remains poorly understood. This study examined genetic and environmental influences on creative achievement in an adult sample of 338 twins (mean age = 26.3 years; SD = 6.6 years). Twins completed the Creative…
Confronting the Achievement Gap
ERIC Educational Resources Information Center
Gardner, David
2007-01-01
This article talks about the large achievement gap between children of color and their white peers. The reasons for the achievement gap are varied. First, many urban minorities come from a background of poverty. One of the detrimental effects of growing up in poverty is receiving inadequate nourishment at a time when bodies and brains are rapidly…
States Address Achievement Gaps.
ERIC Educational Resources Information Center
Christie, Kathy
2002-01-01
Summarizes 2 state initiatives to address the achievement gap: North Carolina's report by the Advisory Commission on Raising Achievement and Closing Gaps, containing an 11-point strategy, and Kentucky's legislation putting in place 10 specific processes. The North Carolina report is available at www.dpi.state.nc.us.closingthegap; Kentucky's…
Wechsler Individual Achievement Test.
ERIC Educational Resources Information Center
Taylor, Ronald L.
1999-01-01
This article describes the Wechsler Individual Achievement Test, a comprehensive measure of achievement for individuals in grades K-12. Eight subtests assess mathematics reasoning, spelling, reading comprehension, numerical operations, listening comprehension, oral expression, and written expression. Its administration, standardization,…
Inverting the Achievement Pyramid
ERIC Educational Resources Information Center
White-Hood, Marian; Shindel, Melissa
2006-01-01
Attempting to invert the pyramid to improve student achievement and increase all students' chances for success is not a new endeavor. For decades, educators have strategized, formed think tanks, and developed school improvement teams to find better ways to improve the achievement of all students. Currently, the No Child Left Behind Act (NCLB) is…
ERIC Educational Resources Information Center
Ohio State Dept. of Education, Columbus. Trade and Industrial Education Service.
The Ohio Trade and Industrial Education Achievement Test battery is comprised of seven basic achievement tests: Machine Trades, Automotive Mechanics, Basic Electricity, Basic Electronics, Mechanical Drafting, Printing, and Sheet Metal. The tests were developed by subject matter committees and specialists in testing and research. The Ohio Trade and…
General Achievement Trends: Maryland
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
General Achievement Trends: Arkansas
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
General Achievement Trends: Idaho
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
General Achievement Trends: Nebraska
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
General Achievement Trends: Colorado
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
General Achievement Trends: Iowa
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
General Achievement Trends: Hawaii
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
General Achievement Trends: Kentucky
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
General Achievement Trends: Florida
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
General Achievement Trends: Texas
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
General Achievement Trends: Oregon
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
General Achievement Trends: Virginia
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
ERIC Educational Resources Information Center
Education Digest: Essential Readings Condensed for Quick Review, 2004
2004-01-01
Is the concept of "honor roll" obsolete? The honor roll has always been a way for schools to recognize the academic achievement of their students. But does it motivate students? In this article, several elementary school principals share their views about honoring student achievement. Among others, Virginia principal Nancy Moga said that students…
ERIC Educational Resources Information Center
Martinez, Paul
The Raising Quality and Achievement Program is a 3-year initiative to support further education (FE) colleges in the United Kingdom in their drive to improve students' achievement and the quality of provision. The program offers the following: (1) quality information and advice; (2) onsite support for individual colleges; (3) help with…
Achieving Perspective Transformation.
ERIC Educational Resources Information Center
Nowak, Jens
Perspective transformation is a consciously achieved state in which the individual's perspective on life is transformed. The new perspective serves as a vantage point for life's actions and interactions, affecting the way life is lived. Three conditions are basic to achieving perspective transformation: (1) "feeling" experience, i.e., getting in…
ERIC Educational Resources Information Center
Abowitz, Kathleen Knight
2011-01-01
Public schools are functionally provided through structural arrangements such as government funding, but public schools are achieved in substance, in part, through local governance. In this essay, Kathleen Knight Abowitz explains the bifocal nature of achieving public schools; that is, that schools are both subject to the unitary Public compact of…
General Achievement Trends: Tennessee
ERIC Educational Resources Information Center
Center on Education Policy, 2009
2009-01-01
This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…
ERIC Educational Resources Information Center
Fletcher, Mike; And Others
1992-01-01
This collection of seven articles examines achievement-based resourcing (ABR), the concept that the funding of educational institutions should be linked to their success in promoting student achievement, with a focus on the application of ABR to postsecondary education in the United Kingdom. The articles include: (1) "Introduction" (Mick…
Large scale tracking algorithms.
Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry
2015-01-01
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
Vectorization of algorithms for solving systems of difference equations
Buzbee, B.L.
1981-01-01
Today's fastest computers achieve their highest level of performance when processing vectors. Consequently, considerable effort has been spent in the past decade developing algorithms that can be expressed as operations on vectors. In this paper two types of vector architecture are defined. A discussion is presented on the variation of performance that can occur on a vector processor as a function of algorithm and implementation, the consequences of this variation, and the performance of some basic operators on the two classes of vector architecture. Also discussed is the performance of higher-level operators, including some that should be used with caution. With both types of operators, the implementation of techniques for solving systems of difference equations is discussed. Included are fast Poisson solvers and point, line, and conjugate-gradient techniques. 1 figure.
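As an example of expressing a difference-equation solver as vector operations, here is a Jacobi sweep for the 2D Poisson equation written as whole-array arithmetic, a simple stand-in for the fast solvers and point/line techniques discussed; the grid size and iteration count are illustrative.

```python
import numpy as np

def jacobi_poisson(f, h, n_iter=2000):
    """Jacobi iteration for -(u_xx + u_yy) = f on the unit square with
    u = 0 on the boundary.  Each sweep is a handful of shifted-array adds
    instead of a double loop; numpy evaluates the whole right-hand side
    before assigning, so this is a true Jacobi (not Gauss-Seidel) sweep."""
    u = np.zeros_like(f)
    for _ in range(n_iter):
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                + u[1:-1, :-2] + u[1:-1, 2:]
                                + h * h * f[1:-1, 1:-1])
    return u
```

The slice expressions are exactly the kind of long vector operands on which the two vector architectures in the paper reach their peak rates.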
Multithreaded Algorithms for Graph Coloring
Catalyurek, Umit V.; Feo, John T.; Gebremedhin, Assefaw H.; Halappanavar, Mahantesh; Pothen, Alex
2012-10-21
Graph algorithms are challenging to parallelize when high performance and scalability are primary goals. Low concurrency, poor data locality, irregular access patterns, and a high ratio of data access to computation are among the chief reasons for the challenge. The performance implications of these features are exacerbated on distributed memory machines. More success is being achieved on shared-memory, multi-core architectures supporting multithreading. We consider a prototypical graph problem, coloring, and show how a greedy algorithm for solving it can be effectively parallelized on multithreaded architectures. In particular, we present two different parallel algorithms. The first relies on speculation and iteration, and is suitable for any shared-memory, multithreaded system. The second uses data-flow principles and is targeted at the massively multithreaded Cray XMT system. We benchmark the algorithms on three different platforms and demonstrate scalable runtime performance. In terms of quality of solution, both algorithms use nearly the same number of colors as the serial algorithm.
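A serial sketch of the greedy algorithm, plus the speculate-and-iterate conflict repair that the first parallel variant relies on; here the speculative phase is simulated by starting from a deliberately conflicting coloring, whereas in the parallel algorithm conflicts arise from threads coloring neighbors concurrently.

```python
def greedy_color(adj):
    """First-fit greedy: each vertex takes the smallest color not used
    by any already-colored neighbor."""
    color = {}
    for v in adj:
        taken = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in taken:
            c += 1
        color[v] = c
    return color

def verify_and_recolor(adj, color):
    """Iteration phase: detect conflicting edges (as the parallel
    verification pass would), re-color one endpoint of each, and repeat
    until the coloring is proper."""
    while True:
        conflicts = [v for v in adj
                     if any(color[u] == color[v] and u > v for u in adj[v])]
        if not conflicts:
            return color
        for v in conflicts:
            taken = {color[u] for u in adj[v]}
            c = 0
            while c in taken:
                c += 1
            color[v] = c
```

In practice few repair rounds are needed, which is why the speculative parallel version keeps nearly the serial color count.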
A variable step-size NLMS algorithm employing partial update schemes for echo cancellation
NASA Astrophysics Data System (ADS)
Xu, Li; Ju, Yongfeng
2011-02-01
Today, with increasing demand for higher quality communication, long adaptive filters are frequently encountered in practical applications such as acoustic echo cancellation. The increase of the adaptive filter length from tens to hundreds or thousands of taps confronts conventional adaptive algorithms with new challenges. Therefore, a new variable step-size normalized least-mean-square algorithm combined with partial updates is proposed, and its performance is investigated through simulations. The proposed step-size method takes into account the instantaneous value of the output error and provides a trade-off between the convergence rate and the steady-state coefficient error. To deal with the obstacle that a large number of filter coefficients diminishes the usefulness of the adaptive filtering algorithm owing to increased complexity, the new algorithm employs tap-selection partial update schemes that update only the subset of filter coefficients corresponding to the largest-magnitude elements of the regression vector. Simulation results for acoustic echo cancellation verify that the proposed algorithm achieves a higher rate of convergence and brings significant computation savings compared to the NLMS algorithm.
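A hedged sketch of the combination described above: NLMS with a variable step size driven by smoothed error power, updating only the taps aligned with the largest-magnitude regressor entries. The specific step-size formula and all parameter values are illustrative stand-ins, not the paper's.

```python
import numpy as np

def vss_nlms_partial(x, d, n_taps=32, m_update=8,
                     mu_max=1.0, mu_min=0.01, alpha=0.95, eps=1e-8):
    """Variable step-size NLMS with tap-selection partial updates:
    per sample, only the m_update taps matching the largest-magnitude
    regressor entries are adapted."""
    w = np.zeros(n_taps)
    p = 0.0                                   # smoothed error power
    e_out = np.zeros(len(x))
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]     # regressor, newest first
        e = d[n] - w @ u
        e_out[n] = e
        p = alpha * p + (1 - alpha) * e * e
        mu = mu_min + (mu_max - mu_min) * p / (p + 1.0)  # illustrative rule
        sel = np.argsort(np.abs(u))[-m_update:]          # largest |u| taps
        w[sel] += mu * e * u[sel] / (u @ u + eps)
    return w, e_out
```

The step size shrinks as the error power decays (fast initial convergence, low steady-state misadjustment), while updating m_update of n_taps coefficients per sample yields the computation savings.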
Cognitive Style, Operativity, and Reading Achievement.
ERIC Educational Resources Information Center
Roberge, James J.; Flexer, Barbara K.
1984-01-01
This developmental study was designed to examine the effects of field dependence-independence and level of operational development on the reading achievement of sixth, seventh, and eighth graders. Field dependence-independence had no significant effect on reading achievement, but high-operational students scored significantly higher than…
Mathematics Coursework Regulates Growth in Mathematics Achievement
ERIC Educational Resources Information Center
Ma, Xin; Wilkins, Jesse L. M.
2007-01-01
Using data from the Longitudinal Study of American Youth (LSAY), we examined the extent to which students' mathematics coursework regulates (influences) the rate of growth in mathematics achievement during middle and high school. Graphical analysis showed that students who started middle school with higher achievement took individual mathematics…
Schooling and Achievement in American Society.
ERIC Educational Resources Information Center
Sewell, William H.; And Others
This book is an outgrowth of an interdisciplinary seminar on achievement processes. The 15 chapters of this book are distributed into three substantive sections. Part One includes a series of chapters dealing in one way or another with achievement in the life cycle. One chapter discusses the causes and consequences of higher education and…
Maryland's Achievements in Public Education, 2011
ERIC Educational Resources Information Center
Maryland State Department of Education, 2011
2011-01-01
This report presents Maryland's achievements in public education for 2011. Maryland's achievements include: (1) Maryland's public schools again ranked #1 in the nation in Education Week's 2011 Quality Counts annual report; (2) Maryland ranked 1st nationwide for a 3rd year in a row in the percentage of public school students scoring 3 or higher on…
Ascent guidance algorithm using lidar wind measurements
NASA Technical Reports Server (NTRS)
Cramer, Evin J.; Bradt, Jerre E.; Hardtla, John W.
1990-01-01
The formulation of a general nonlinear programming guidance algorithm that incorporates wind measurements in the computation of ascent guidance steering commands is discussed. A nonlinear programming (NLP) algorithm that is designed to solve a very general problem has the potential to address the diversity demanded by future launch systems. Using B-splines for the command functional form allows the NLP algorithm to adjust the shape of the command profile to achieve optimal performance. The algorithm flexibility is demonstrated by simulation of ascent with dynamic loading constraints through a set of random wind profiles with and without wind sensing capability.
[Achievement of therapeutic objectives].
Mantilla, Teresa
2014-07-01
Therapeutic objectives for patients with atherogenic dyslipidemia are achieved by improving patient compliance and adherence. Clinical practice guidelines address the importance of treatment compliance for achieving objectives. The combination of a fixed dose of pravastatin and fenofibrate increases the adherence by simplifying the drug regimen and reducing the number of daily doses. The good tolerance, the cost of the combination and the possibility of adjusting the administration to the patient's lifestyle helps achieve the objectives for these patients with high cardiovascular risk. PMID:25043543
Dynamic hybrid algorithms for MAP inference in discrete MRFs.
Alahari, Karteek; Kohli, Pushmeet; Torr, Philip H S
2010-10-01
In this paper, we present novel techniques that improve the computational and memory efficiency of algorithms for solving multilabel energy functions arising from discrete MRFs or CRFs. These methods are motivated by the observations that the performance of minimization algorithms depends on: 1) the initialization used for the primal and dual variables and 2) the number of primal variables involved in the energy function. Our first method (dynamic alpha-expansion) works by "recycling" results from previous problem instances. The second method simplifies the energy function by "reducing" the number of unknown variables present in the problem. Further, we show that it can also be used to generate a good initialization for the dynamic alpha-expansion algorithm by "reusing" dual variables. We test the performance of our methods on energy functions encountered in the problems of stereo matching and color and object-based segmentation. Experimental results show that our methods achieve a substantial improvement in the performance of alpha-expansion, as well as other popular algorithms such as sequential tree-reweighted message passing and max-product belief propagation. We also demonstrate the applicability of our schemes for certain higher order energy functions, such as the one described in [1], for interactive texture-based image and video segmentation. In most cases, we achieve a 10-15 times speed-up in the computation time. Our modified alpha-expansion algorithm provides similar performance to Fast-PD, but is conceptually much simpler. Both alpha-expansion and Fast-PD can be made orders of magnitude faster when used in conjunction with the "reduce" scheme proposed in this paper. PMID:20724761
Image watermarking using a dynamically weighted fuzzy c-means algorithm
NASA Astrophysics Data System (ADS)
Kang, Myeongsu; Ho, Linh Tran; Kim, Yongmin; Kim, Cheol Hong; Kim, Jong-Myon
2011-10-01
Digital watermarking has received extensive attention as a new method of protecting multimedia content from unauthorized copying. In this paper, we present a nonblind watermarking system using a proposed dynamically weighted fuzzy c-means (DWFCM) technique combined with discrete wavelet transform (DWT), discrete cosine transform (DCT), and singular value decomposition (SVD) techniques for copyright protection. The proposed scheme efficiently selects blocks in which the watermark is embedded using new membership values of DWFCM as the embedding strength. We evaluated the proposed algorithm in terms of robustness against various watermarking attacks and imperceptibility compared to other algorithms [DWT-DCT-based and DCT-fuzzy c-means (FCM)-based algorithms]. Experimental results indicate that the proposed algorithm outperforms other algorithms in terms of robustness against several types of attacks, such as noise addition (Gaussian noise, salt and pepper noise), rotation, Gaussian low-pass filtering, mean filtering, median filtering, Gaussian blur, image sharpening, histogram equalization, and JPEG compression. In addition, the proposed algorithm achieves higher values of peak signal-to-noise ratio (approximately 49 dB) and lower values of measure-singular value decomposition (5.8 to 6.6) than other algorithms.
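The block-selection step described above builds on standard fuzzy c-means clustering. The sketch below shows plain FCM (not the paper's dynamically weighted variant) ranking image blocks by membership in a high-activity cluster; the 1-D "features" standing in for per-block DWT-coefficient statistics, the two-cluster setup, and the selection rule are all illustrative assumptions.

```python
def fuzzy_c_means(points, c, m=2.0, iters=50):
    # standard fuzzy c-means on 1-D features; returns cluster centers and the
    # membership matrix u[j][i] of point j in cluster i (rows sum to 1)
    srt = sorted(points)
    centers = [srt[j * (len(points) - 1) // (c - 1)] for j in range(c)]
    u = [[0.0] * c for _ in points]
    for _ in range(iters):
        for j, x in enumerate(points):
            d = [abs(x - ck) + 1e-12 for ck in centers]
            for i in range(c):
                u[j][i] = 1.0 / sum((d[i] / dk) ** (2.0 / (m - 1)) for dk in d)
        for i in range(c):
            w = [u[j][i] ** m for j in range(len(points))]
            centers[i] = sum(wj * x for wj, x in zip(w, points)) / sum(w)
    return centers, u

# toy stand-in for per-block features (e.g. variance of DWT coefficients)
features = [0.1, 0.15, 0.2, 0.9, 1.0, 1.1, 5.0, 5.2]
centers, u = fuzzy_c_means(features, c=2)
# pick embedding blocks with the strongest membership in the high-activity
# cluster -- a plausible selection rule, not the paper's DWFCM weighting
hi = max(range(2), key=lambda i: centers[i])
ranked = sorted(range(len(features)), key=lambda j: -u[j][hi])
```

In the paper, the membership values themselves additionally modulate the embedding strength per block.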
Predicting Achievement and Motivation.
ERIC Educational Resources Information Center
Uguroglu, Margaret; Walberg, Herbert J.
1986-01-01
Motivation and nine other factors were measured for 970 students in grades five through eight in a study of factors predicting achievement and predicting motivation. Results are discussed. (Author/MT)
Student Achievement and Motivation
ERIC Educational Resources Information Center
Flammer, Gordon H.; Mecham, Robert C.
1974-01-01
Compares the lecture and self-paced methods of instruction on the basis of student motivation and achievement, comparing motivating and demotivating factors in each, and their potential for motivation and achievement. (Authors/JR)
Stability of Bareiss algorithm
NASA Astrophysics Data System (ADS)
Bojanczyk, Adam W.; Brent, Richard P.; de Hoog, F. R.
1991-12-01
In this paper, we present a numerical stability analysis of the Bareiss algorithm for solving a symmetric positive definite Toeplitz system of linear equations. We also compare the Bareiss algorithm with the Levinson algorithm and conclude that the former has superior numerical properties.
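For readers unfamiliar with the problem class, here is a minimal sketch of the Levinson algorithm (the comparison point in the abstract, not the Bareiss variant the paper analyzes) solving a symmetric positive definite Toeplitz system T x = b in O(n^2) time; the example matrix and right-hand side are illustrative.

```python
def levinson_solve(t, b):
    # Levinson recursion for a symmetric positive definite Toeplitz system
    # T x = b, where t is the first column of T.  O(n^2) time, O(n) space.
    n = len(t)
    f = [1.0 / t[0]]           # forward vector: T_k f = e_1
    x = [b[0] / t[0]]
    for k in range(1, n):
        # error terms from padding f and x with a trailing zero
        ef = sum(t[k - i] * f[i] for i in range(k))
        ex = sum(t[k - i] * x[i] for i in range(k))
        denom = 1.0 - ef * ef
        alpha, beta = 1.0 / denom, -ef / denom
        back = f[::-1]          # symmetric case: backward vector = reversed f
        f = [alpha * (f[i] if i < k else 0.0)
             + beta * (back[i - 1] if i > 0 else 0.0) for i in range(k + 1)]
        new_back = f[::-1]      # satisfies T_{k+1} new_back = e_{k+1}
        x = [(x[i] if i < k else 0.0) + (b[k] - ex) * new_back[i]
             for i in range(k + 1)]
    return x

t = [4.0, 1.0, 0.5, 0.25]      # first column of a diagonally dominant SPD Toeplitz matrix
b = [1.0, 2.0, 3.0, 4.0]
x = levinson_solve(t, b)
# residual check: applying T to x should reproduce b
resid = [sum(t[abs(i - j)] * x[j] for j in range(4)) - b[i] for i in range(4)]
```

The Bareiss algorithm instead eliminates the Toeplitz matrix directly; the paper's point is that its error propagation behaves better than this recursion's.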
NASA Astrophysics Data System (ADS)
Owen, Mark W.; Stubberud, Allen R.
2003-12-01
Highly maneuvering threats are a major concern for the Navy and the DoD, and the technology discussed in this paper is intended to help address this issue. A neural extended Kalman filter algorithm has been embedded in an interacting multiple model architecture for target tracking. The neural extended Kalman filter algorithm is used to improve motion model prediction during maneuvers. With a better target motion model, noise reduction can be achieved through a maneuver. Unlike the interacting multiple model architecture, which uses a high process noise model to hold a target through a maneuver with poor velocity and acceleration estimates, a neural extended Kalman filter is used to predict corrections to the velocity and acceleration states of a target through a maneuver. The neural extended Kalman filter estimates the weights of a neural network, which in turn are used to modify the state estimate predictions of the filter as measurements are processed. The neural network training is performed on-line as data is processed. In this paper, the simulation results of a tracking problem using a neural extended Kalman filter embedded in an interacting multiple model tracking architecture are shown. Preliminary results on the 2nd Benchmark Problem are also given.
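The neural extended Kalman filter builds on the ordinary predict/update cycle, augmenting the state vector with neural-network weights. As background only (this is not the paper's filter, and the noise variances are made up), a scalar Kalman filter estimating a constant from noisy measurements shows that cycle:

```python
import random

def kalman_1d(zs, q=1e-4, r=0.25):
    # scalar Kalman filter estimating a (nearly) constant value from noisy
    # measurements; q is the process noise variance, r the measurement
    # noise variance, p the estimate variance
    x, p = 0.0, 1.0
    estimates = []
    for z in zs:
        p = p + q              # predict: uncertainty grows by process noise
        k = p / (p + r)        # Kalman gain
        x = x + k * (z - x)    # update with the innovation (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates

random.seed(2)
truth = 3.0
zs = [truth + random.gauss(0, 0.5) for _ in range(200)]
est = kalman_1d(zs)
err_filter = abs(est[-1] - truth)
```

In the paper's scheme, the prediction step is corrected by a neural network whose weights are themselves estimated by the filter on-line, which is what lets the motion model adapt during a maneuver.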
Feedback algorithm for simulation of multi-segmented cracks
Chady, T.; Napierala, L.
2011-06-23
In this paper, a method for obtaining a three-dimensional crack model from a radiographic image is discussed. A genetic algorithm aiming at close simulation of the crack's shape is presented. Results obtained with the genetic algorithm are compared to those achieved in the authors' previous work. The described algorithm has been tested on both simulated and real-life cracks.
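The genetic-algorithm loop behind such shape fitting can be sketched generically (the paper's encoding and fitness are not given here): tournament selection, one-point crossover, and bit-flip mutation, with a toy bitstring-matching fitness standing in for the comparison against the radiographic image.

```python
import random

def genetic_search(fitness, length, pop_size=30, gens=100, mut=0.02):
    # minimal genetic algorithm: tournament selection, one-point crossover,
    # bit-flip mutation; individuals are bitstrings of the given length
    random.seed(3)
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(gens):
        def pick():
            a, b = random.sample(pop, 2)     # binary tournament
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = random.randrange(1, length)        # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [g ^ 1 if random.random() < mut else g for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# toy fitness: bits matched against a fixed target "shape"
target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]
score = lambda ind: sum(a == b for a, b in zip(ind, target))
best = genetic_search(score, len(target))
```

In the paper the fitness would instead measure how closely the simulated radiograph of a candidate crack matches the measured one.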
An Experimental Method for the Active Learning of Greedy Algorithms
ERIC Educational Resources Information Center
Velazquez-Iturbide, J. Angel
2013-01-01
Greedy algorithms constitute an apparently simple algorithm design technique, but their learning goals are not simple to achieve. We present a didactic method aimed at promoting active learning of greedy algorithms. The method is focused on the concept of selection function, and is based on explicit learning goals. It mainly consists of an…
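The "selection function" the method centres on can be made concrete with a classic example (the activity-selection problem, chosen here for illustration, not taken from the article): the greedy loop is generic, and the selection function passed in encapsulates the greedy choice.

```python
def greedy_activity_selection(activities, select):
    # generic greedy loop: `select` is the selection function that picks the
    # next activity from the currently feasible candidates
    chosen, rest = [], list(activities)
    last_end = float("-inf")
    while rest:
        cand = [a for a in rest if a[0] >= last_end]  # feasible: starts after last chosen ends
        if not cand:
            break
        best = select(cand)
        chosen.append(best)
        last_end = best[1]
        rest.remove(best)
    return chosen

# activities as (start, end); earliest finish time is the selection
# function that makes this greedy strategy optimal
acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
earliest_finish = lambda cand: min(cand, key=lambda a: a[1])
schedule = greedy_activity_selection(acts, earliest_finish)
```

Swapping in a different selection function (e.g. shortest duration) keeps the loop unchanged but can break optimality, which is exactly the kind of experiment an active-learning exercise can build on.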
Cognitive Style, Operativity, and Mathematics Achievement.
ERIC Educational Resources Information Center
Roberge, James J.; Flexer, Barbara K.
1983-01-01
This study examined the effects of field dependence/independence and the level of operational development on the mathematics achievement of 450 students in grades 6-8. Field-independent students scored significantly higher on total mathematics, concepts, and problem-solving tests. High-operational students scored significantly higher on all tests.…