Introduction
This report examines a Preliminary Geography assessment task that encompasses the topic
of ‘Biophysical Interactions’ through the format of a fieldwork report. The written report
requires students to undertake fieldwork and utilise geographical inquiry methodologies to
describe the interactions between the four components of a specific biophysical environment.
In addition, students explain the human impacts on each biophysical component, with specific
reference to fieldwork conducted at a specified site of inquiry: a freshwater river adjoining
the school grounds in a rural area.
-------------------------------------------------------------------------------------------------------------------------------------
Year 11 Geography 2019
Assessment Task 1: Fieldwork Report (20 marks)
Due Date: Week 10, Term 1 (Thursday 4th April 2019)
Marks: 20
Weighting: 30%
Hand the assessment into the assessment box by 9am
Outcomes to be assessed:
● P2. Describes the interactions between the four components which define the
biophysical environment
● P3. Explains how a specific environment functions in terms of biophysical factors
● P9. Uses maps, graphs and statistics, photographs and fieldwork to conduct geographical
inquiries
● P12. Communicates geographical information, ideas and issues using appropriate
written or oral, cartographic and graphic forms
[Marking rubric table: Criteria / Marks]
COMMENTS:
_____________________________________________________________________________________________________________________
-------------------------------------------------------------------------------------------------------------------------------------
Literature Synthesis
Key features of quality assessment design identified in this report through the analysis of
relevant literature include the appropriateness of the assessment methodology, the alignment of
assessment criteria to curriculum outcomes, and the explicitness of the task instructions,
rubric and expectations of students, as they pertain to Stage Six Geography assessment.
Assessment Methodology
A quality assessment design requires the assessment methodology utilised to provide a
summative measurement of student performance to be appropriate to the learning outcomes
which are being assessed. Darling-Hammond & Adamson (2010) suggest that whilst formal
and standardised examination methods provide a structured format wherein students can
demonstrate declarative knowledge, they do not reflect the unstructured nature of students
applying higher-order thinking and procedural knowledge in authentic settings outside the
classroom. Moreover, other literature reviewed cited the over-emphasis on this form of
assessment, which favours students’ recall and recognition (Tomlinson & McTighe, 2006, p. 61)
and uses narrowed measures of assessment, resulting in a reductionist curriculum (Lingard,
2010). In addition, Biddulph, Lambert & Balderstone (2015) note that this assessment
methodology can distort the pedagogical facilitation of secondary Geography and reduce the
time allocated to fieldwork and to meaningful, open-ended, critical and speculative geographical
inquiry. As such, updated assessment and reporting requirements by NESA (2017) for Stage
Six Geography now stipulate that a maximum of one formal written examination may be used in
school-based assessment.
Task Design
The task affords a degree of accessibility for diverse students through light scaffolding, in the
form of clarifying and defining key directive verbs such as ‘explain’ and providing a report
scaffold that clarifies content expectations and offers suggested examples of fieldwork findings.
The task enables a degree of choice in the fieldwork findings, images or sketches used to support
students’ explanations of biophysical interactions. However, the format of the report and the
location of the fieldwork are relatively fixed in comparison to the Senior Geography Project
(NESA, 2017, p. 6); this is also appropriate, as it ensures consistency in the marking process. This
fixed structure of the assessment design, consistent use of performance descriptors and
curriculum alignment of the task collectively minimise the risk of bias. However, the measures
of student performance in the marking rubric are all qualitative and unweighted, leaving room
for bias in rubric interpretation and marking methodology. The validity
of the assessment design is evidenced by the overall weighting, format, methodology, and the
emphasis on geographical inquiry and fieldwork. These factors all adhere to NESA
requirements for Preliminary school-based assessment (NESA, 2017).
Marking Rubric
The use of the directive verbs ‘describe’ and ‘explain’ (NESA, n.d.) is consistent throughout the
instructions to students and the rubric provided, in addition to being clearly aligned with the
relevant syllabus outcomes to be assessed. The rubric does not indicate differentiation
between levels of performance through the use of a spectrum of higher- and lower-order directive
verbs, instead using key differentiating adjectives as marking descriptors for each criterion.
These are appropriately sequenced from comprehensive to detailed, sound, basic and
limited, in alignment with the common grade scale for Preliminary courses (NESA, n.d.). The
scale of each criterion in the rubric accurately reflects a full range of student performance
within a cohort and the terminology used also aligns to the aforementioned common grade
scale. However, the structure of the rubric amalgamates criteria into single bands as a
measure of performance, rather than presenting a matrix of individual criteria sequenced by level of
proficiency. This hinders the rubric in differentiating student performance on each
individual criterion as a dimension of the assessment, thus reducing the validity of the rubric
as a means of accurately measuring student achievement across multiple outcomes. This lack
of specificity also extends to the absence of any weightings within the assessment design that
would enable students to determine the relative value of each criterion or component. Whilst this
is sufficient to meet NESA assessment guidelines, it lessens the clarity of the rubric as
a means of providing ‘explicit quality criteria’ (Gore & Ladwig, 2006).
Conclusion
The key principles of effective assessment identified in this report underpin best practice
assessment design and enable the valid and effective measurement of student performance.
These include appropriate curricular alignment that effectively measures student performance