Coming to Melbourne, this conference series will provide fresh perspectives on assessment in a challenging educational landscape. Sessions will include leading research, practical tips and networking opportunities for educators at all stages of their careers.
Date & Venue
22 March 2019
The City Convention Centre, Victoria University
Level 12, 300 Flinders Street
Melbourne VIC Australia
Keynote - ‘Teacher assessment literacy: Are we there yet?’ | Dr Dennis Alonzo
Teacher assessment literacy is widely seen as the centrepiece of effective learning and teaching. Despite its prominence, it remains an ill-defined construct because of competing international and national conceptualisations of which assessment knowledge and skills matter most for teachers. In this talk, we argue that teacher assessment knowledge and skills are inherently context-based and need to be defined in light of the socio-cultural and educational factors that influence assessment for effective learning. We will highlight the multidimensional nature of teacher assessment literacy, which requires teachers not only to identify, develop and implement assessment strategies to collect and analyse data for highly contextualised and trustworthy decisions that effectively support student learning, but, more importantly, to work closely with students to prepare them to better engage in and monitor their own learning. We will conclude by emphasising teachers' critical role in meeting the assessment information needs of school leaders, colleagues and parents/carers, helping these stakeholders actively contribute to improving student learning.
Differentiation can best be summarised as recognising that all learners differ in their learning abilities, whether in their progress in literacy and numeracy or in other areas of the curriculum. Differentiation can be a very difficult topic to tackle in daily practice, and it becomes even more difficult to implement in assessments.
An incredibly useful differentiation tool, for both teachers and learners, is the rubric. The benefits of a well-constructed, specific rubric far outweigh the challenges. A well-structured rubric with clear task values makes it easy to determine which skills are needed, which can be improved, and how challenges can be introduced to further learning outcomes.
This session will focus on how the matrix of a rubric is set up in a manner that allows for differentiation and how this can be used for a variety of tasks in the classroom.
It has long been recognised that although learning combines passive and active elements, student outcomes improve when students are cognitively engaged in active learning or higher order thinking. Cognitive rigour can be defined as the amount of time that students spend engaged in higher order thinking activities that require the transfer of knowledge (Paige, Sizemore & Neace, 2013).
Cognitive taxonomies, such as Bloom’s revised taxonomy, Webb’s Depth of Knowledge (DoK) and Karin Hess’ Cognitive Rigor Matrix (CRM), give us a means to evaluate the level of thinking that students are engaged in. As Hess’ CRM takes the best from both Bloom’s cognitive dimensions and Webb’s Depth of Knowledge, it provides a detailed structure that can assist teachers in planning activities that encourage and scaffold higher order thinking in their classrooms.
As well as deciding on the breadth and depth of the cognitive thinking their students engage in, teachers are also expected to create, evaluate and/or report on assessments that target these forms of cognition. For this reason, it is vital for teachers to understand and appropriately use a range of cognitive verbs, especially given recent changes in senior syllabi and the focus on 21st century learning skills.
Evolving views about assessment validation have been evident in research literature since the formal conception of validity in educational assessment, nearly a century ago.
Traditionally, the approach to validity focused mainly on ensuring that a test ‘measured what it was supposed to measure’. One weakness of this approach is the elusive search for a ‘gold standard’ against which to determine the validity of a new test, despite the various techniques employed to collect validity evidence.
About three decades ago, researchers expanded the theory of validity to include the ‘social consequences’ of a test. Arguably, the evaluation of ‘social consequences’ would later form a foundation for understanding validity and influence the design of frameworks for articulating assessment validation. In recent times, the field of educational assessment has witnessed some fascinating discussions on assessment evaluation, particularly from classroom-based assessment researchers.
The goal of this paper is to synthesise these stimulating ideas about assessment evaluation and propose a teacher-based assessment validation framework which can influence the design, development and evaluation of assessment programs. In addition, the paper conceptualises “teacher assessment validation literacy” and “teachers as sources of evidence” as important phenomena in the articulation of classroom-based assessment validation.
Have you given feedback on assessment results with the best intentions, only to receive an unexpected or negative response from your students or their parents? Students require timely, effective feedback from their teachers in order to understand their own assessment results and to improve their study habits, but it isn’t always received as helpful advice. Giving ‘Goldilocks’ feedback – comments that are not too harsh, but also not too vague – can be achieved! This workshop aims to provide participants with an overview of effective ways to give feedback that improves student learning. The session will explore examples of effective (and not so effective) feedback drawn from research and the presenter’s own practice.
UNSW Global invites all delegates to participate in the roundtable discussion. It is an opportunity to network with colleagues and share thoughts on the ideas presented in the conference. With an educational landscape that has been focused on measuring achievement, we invite delegates to share their ideas on whether progress can be measured and how it can be done.
The Assessment Evolution Conference panel features experienced educators, educational researchers and assessment specialists. They will tackle the challenging question of assessment best practice, both in its current and future states. What changes need to be made to assessment practices to ensure that the data gathered on students is meaningful and best placed to serve learning and teaching programs?
Prof. Chris Davison
School of Education, University of New South Wales (UNSW).
Dr. Dennis Alonzo
School of Education, University of New South Wales (UNSW).