
Assessment Report Patrick Griffin – 19812351 - 2167 words

Introduction
This report will examine a Preliminary Geography assessment task that encompasses the topic
of ‘Biophysical Interactions’ through the format of a fieldwork report. This written report
requires students to undertake fieldwork and utilise geographical inquiry methodologies to
describe the interaction between the four components of a specific biophysical environment.
In addition, students explain the human impacts on each biophysical component, with specific
reference to fieldwork conducted on a specified site of inquiry, named as a freshwater river
adjoining the school grounds in a rural area.
-------------------------------------------------------------------------------------------------------------------------------------
Year 11 Geography 2019
Assessment Task 1: Fieldwork Report (20 marks)
Due Date: Week 10, Term 1 (Thursday 4th April, 2019)
Marks: 20
Weighting: 30%
Hand the assessment in to the assessment box by 9am
Outcomes to be assessed:
● P2. Describes the interactions between the four components which define the
biophysical environment
● P3. Explains how a specific environment functions in terms of biophysical factors
● P9. Uses maps, graphs and statistics, photographs and fieldwork to conduct geographical
inquiries
● P12. Communicates geographical information, ideas and issues using appropriate
written and/or oral, cartographic and graphic forms

Task: Fieldwork Report (1200-1500 words)


Explain the human impacts on the functioning of the atmosphere, hydrosphere, lithosphere
and biosphere.
Explain: relate cause and effect; make the relationships between things evident; provide why
and/or how.

In your answer you must include the following points:
● A description of the four components of the biophysical environment
● An explanation of the human impacts
● Examples from your fieldwork
● Diagrams, photos and field sketch
● A comprehensive bibliography of at least 4 sources
● Use report format, such as headings, sub-headings and dot points
Marking Guidelines

17–20 marks:
● Constructs a comprehensive description of all components of the biophysical environment using relevant terminology and concepts
● Presents a sustained, logical and well-structured explanation of the impacts of human activities
● Provides comprehensive examples collected from fieldwork
● Provides comprehensive and very clearly labelled diagrams, photos and sketches
● Uses a sophisticated report format
● Provides a comprehensive bibliography

13–16 marks:
● Constructs a detailed description of all components of the biophysical environment using relevant terminology and concepts
● Presents a sustained and structured explanation of the impacts of human activities
● Provides detailed examples collected from fieldwork
● Provides clear and labelled diagrams, photos and sketches
● Uses a competent report format
● Provides a detailed bibliography

9–12 marks:
● Constructs a sound description of all components of the biophysical environment using relevant terminology and concepts
● Presents a sound explanation of the impacts of human activities
● Provides sound examples from fieldwork
● Provides some diagrams, photos and sketches
● Uses a satisfactory report format
● Provides a sound bibliography

5–8 marks:
● Constructs a basic description of all components of the biophysical environment using relevant terminology and concepts
● Presents a basic explanation of the impacts of human activities
● Provides some basic examples
● Attempts to provide basic diagrams, photos or sketches
● Attempts to use a report format
● Provides a basic bibliography

1–4 marks:
● Limited or no description of components
● Limited or no explanation of the impacts of human activities
● Limited or no examples
● No diagrams or sketches
● Limited use of a report format
● No bibliography

COMMENTS:

_____________________________________________________________________________________________________________________
_____________________________________________________________________________________________________________________
_____________________________________________________________________________________________________________________
_____________________________________________________________________________________________________________________

-------------------------------------------------------------------------------------------------------------------------------------
Literature Synthesis
Key features of quality assessment design identified in this report through the analysis of
relevant literature include the appropriateness of the assessment methodology, the alignment of
assessment criteria to curriculum outcomes, and the explicitness of the task instructions,
rubric and expectations of students, as they pertain to Stage Six Geography assessment.

Assessment Methodology
A quality assessment design requires the assessment methodology utilised to provide a
summative measurement of student performance to be appropriate to the learning outcomes
which are being assessed. Darling-Hammond & Adamson (2010) suggest that whilst formal
and standardised examination methods provide a structured format wherein students can
demonstrate declarative knowledge, they do not reflect the unstructured nature of students
applying higher-order thinking and procedural knowledge in authentic settings outside the
classroom. Moreover, other literature reviewed cited an over-emphasis on this form of
assessment, which favours students’ recall and recognition (Tomlinson & McTighe, 2006, p. 61)
and uses narrowed measures of assessment, resulting in a reductionist curriculum (Lingard,
2010). In addition, Biddulph, Lambert & Balderstone (2015) note that this assessment
methodology can distort the pedagogical facilitation of secondary Geography and reduce the
time allocated to fieldwork and to meaningful, open-ended, critical and speculative geographic
inquiry. As such, updated assessment and reporting requirements by NESA (2017) for Stage
Six Geography now stipulate a maximum of one formal written examination may be used in
school-based assessment.

In contrast, quality performance assessment design requires multiple measures of student
performance that provide a more comprehensive and relevant reflection of understanding
and procedural knowledge that students apply in a range of formats or authentic settings.
These higher-order learning outcomes can be facilitated through a task design wherein
assessment procedures are integrated into the learning process in meaningful activity through
‘assessment as learning’ and performance assessment. Such assessment design is identified
by Tomlinson & McTighe (2006) as a key principle of effective assessment (p. 61), in addition
to facilitating opportunities for meaningful student reflection, self-assessment and feedback.
The suitability of a piece of assessment in providing prompt, meaningful and comprehensive
feedback for students is a fundamental feature of quality assessment design identified in
literature, and posited by Hattie (2009, p. 173) as amongst the more effective methods of
improving student learning outcomes and promoting cognitive growth. For secondary
Geography students, examples of such assessment activity can include case studies with
primary research and authentic fieldwork. Biddulph, Lambert & Balderstone (2015) note that
performance assessment formats in Geography must be fit for purpose and adhere to the
curriculum and targeted learning outcomes that are being assessed. This adheres to the
principles of effective assessment outlined by NESA (n.d.), which state that the method of
assessment must provide opportunities for students to demonstrate ability in a range of task
types that are a valid instrument to assess targeted outcomes.

Criteria & Weighting Alignment to Curriculum
The criteria outlined, and the weighting of each criterion, within an assessment design are fundamental
to ensuring the validity and reliability of the task as a measure of student performance. NESA
prescribes that these criteria must be standards-based and align to targeted syllabus
outcomes (NESA, n.d.). This curriculum alignment extends to assessment criteria that
encompass mandated Stage Six Geography requirements, such as the use of geographical
inquiry methodologies, including tools, skills, research, and fieldwork (NESA, 2017). Darling-
Hammond & Adamson (2010) espouse that quality assessment design considers the student
cognition required in a specific domain, then establishes criteria based on a definition of
competent performance (pp. 23-24). In order to measure a spectrum of differing levels in
student proficiency, structuring criteria in a weighted rubric framework facilitates the
differentiation of student performance in an accurate manner, whilst ensuring the consistency
and fairness of marking procedures and the continued alignment to curriculum objectives.

Communication, Clarity & Expectations
Quality assessment design requires the effective communication of the assessment rationale,
criteria and expectations of students with explicitness and clarity. The clarity of these
elements of assessment design ensures the validity and consistency of marking procedures by
explicitly targeting specific assessable domains. In addition, this provides ‘explicit quality
criteria’ (Gore & Ladwig, 2006) for students to comprehend the expectations of the task and
ensure the reliability and fairness of the assessment process. Tomlinson & McTighe (2006)
assert that a quality rubric is key to facilitating meaningful feedback for students (p. 78) by
providing a comprehensive framework of feedback for student reflection and cognitive growth.

Through this task, students learn to apply the findings of fieldwork activity to describe the
interrelation between the biophysical components of an environment and to explain human
impacts as they pertain to the fieldwork site. The premise of this approach to performance
assessment appears to enable
students to contextualise their knowledge of geographical issues through authentic
experiences to form ‘deep understanding’ (Gore & Ladwig, 2006), demonstrating the
meaningful integration of assessment into learning activity. To this end, the intent of the
assessment facilitates ‘assessment as learning’ and the application of procedural knowledge
that students apply in authentic settings, as identified in literature (Tomlinson & McTighe,
2006), and aligns with the aims and objectives of the Stage Six Geography syllabus (Board of
Studies, 2009). However, the purpose of the task ultimately lacks clarity and explicitness
through the absence of a written rationale that would otherwise communicate this underlying
intent of the task to students. Subsequently, this absence reduces the effectiveness of the task
as a means of best practice assessment.

Task Design
The task is afforded a degree of accessibility for diverse students by light scaffolding in the
form of clarifying and defining key directive verbs such as ‘explain’ and by providing a report
scaffold to clarify content expectations and provide suggested examples of fieldwork findings.
The task enables a degree of choice in fieldwork findings, images or sketches used to support
students’ explanations of biophysical interactions. However, the format of the report and the
location of the fieldwork are relatively fixed in comparison to the Senior Geography Project
(NESA, 2017, p. 6), although this is also appropriate to ensure consistency of the marking process. This
fixed structure of the assessment design, consistent use of performance descriptors and
curriculum alignment of the task collectively minimise the risk of bias. However, the measures
of student performance in the marking rubric are all qualitative and unweighted, enabling
possible influence of bias in the rubric interpretation and marking methodology. The validity
of the assessment design is evidenced by the overall weighting, format, methodology, and the
emphasis on geographical inquiry and fieldwork. These factors all adhere to NESA
requirements of Preliminary school-based assessments (NESA, 2017).

Marking Rubric
The use of the directive verbs ‘describe’ and ‘explain’ (NESA, n.d.) is consistent throughout the
instructions to students and the rubric provided, in addition to being in clear alignment with the
relevant syllabus outcomes to be assessed. The rubric does not indicate differentiation
between levels of performance through the use of a spectrum of higher- and lower-order directive
verbs, instead using key differentiating adjectives as marking descriptors for each criterion.
These are appropriately sequenced from comprehensive to detailed, sound, basic and
limited, in alignment with the common grade scale for preliminary courses (NESA, n.d.). The
scale of each criterion in the rubric accurately reflects a full range of student performance
within a cohort and the terminology used also aligns to the aforementioned common grade
scale. However, the structure of the rubric amalgamates criteria into single bands as a
measure of performance, rather than a matrix of individual criteria sequenced by level of
proficiency. This hinders the rubric in differentiating student performance between each
individual criterion as a dimension of the assessment, thus reducing the validity of the rubric
as a means of accurately measuring student achievement across multiple outcomes. This lack
of specificity also extends to the absence of any weightings within the assessment design that
would enable students to determine the relative value of each criterion or component. Whilst this
is sufficient to meet NESA assessment guidelines, it lessens the clarity of the rubric as
a means of providing ‘explicit quality criteria’ (Gore & Ladwig, 2006).

Feedback & Grading
The rubric addresses each individual criterion on a scale of performance using appropriate
terminology for students’ comprehension, as posited by Darling-Hammond & Adamson
(2010). Whilst clear terminology is used to provide a comprehensive, scaled measure of student
performance across criteria, due to the aforementioned amalgamation of criteria into unified
marking bands the rubric lacks the explicitness and structure needed to convey
differentiated results from which students can interpret meaningful
feedback. This feedback is essential to the pedagogical value of ‘assessment as learning’
(Hattie, 2009) in order for students to be able to identify and isolate specific skills or
understanding that require improvement for the purpose of reflection for meaningful
cognitive growth.

Conclusion
The key principles of effective assessment identified in this report underpin best practice
assessment design and enable the valid and effective measurement of student performance.
This includes appropriate curricular alignment that effectively measures student performance
