An interesting title for a first blog post, and one I chose for two reasons. Firstly, “Expectations” was the focus of our first pilot training session with students from Chemistry and CEAS. Secondly, it reflects our need for clear expectations as a group of people driving towards a common goal for Graduate Teaching Assistant (GTA) training.
Setting expectations is vital when working in learning & teaching, especially in HE. Just as a physiotherapist relies on clients doing their exercises between appointments to achieve steady improvement, we need our students to understand that it is their input, as well as ours, which builds their knowledge of a subject. Mismatched or unmet expectations are often the source of problems in HE: lecturers failing to appreciate that their students don’t know how the exam will be set, or students failing to engage with course materials outside of lectures. Whatever the scenario, mismatched expectations lead to frustration and feelings of mistrust on both sides.
One of the most common sources of such frustration is the relationship between GTAs and undergraduates, especially where those GTAs are postgraduates. Undergraduates often see postgraduate GTAs as near-peers (as they rightly should) and assume that they will therefore be willing (and able) to answer all their questions, as well as be generous in their marking. This puts our GTAs in a compromised position: if they give in to pressure from undergraduates, they will have failed in their duties to their teaching team. Yet if they maintain fair standards in their marking and try to get students to think for themselves (as we no doubt wish them to), they will have to face the students’ frustration. So, what more pertinent subject on which to focus our first training session than “Expectations & Dealing with Novice Scientists”?
Engaging participants pre-session (a flipped classroom model)
The session ran as a discussion workshop with some pre-attendance material to work through. Before arriving, attendees were asked to comment on their expectations of their own role as a GTA, and their motivations for taking it on. They were also asked to reflect on what the students expected of them, and on their concerns about fulfilling the role. (More on these results in another blog post!) Suffice it to say, of the 120 GTAs signed up, 116 completed the survey.
Creating an active learning environment
We had about 40 students signed up per session, of which about 25 turned up, so roughly a 60% turnout. During the session, attendees were asked to rank the importance of aspects of their role and match these against student expectations. This was followed by a discussion of where gaps existed: which roles were GTAs expected to carry out that students didn’t anticipate, and what expectations were students likely to have that GTAs could not deliver?
Recurrent themes included the duty of a GTA to maintain a safe and professional working environment in laboratories, which students might not appreciate, and the role of a GTA in getting students to think critically and answer their own questions. In terms of student expectations, the issue of students wanting all their questions answered, and wanting the GTA to do the work for them, was foremost in the discussion, closely followed by concerns over students expecting “good” marks irrespective of their performance.
The final part of the session was spent briefly discussing how GTAs might overcome some of these issues: by setting expectations early, by being a role model (asking questions of the students and starting conversations), and by using positive feedback to reinforce the value of safety and professionalism and to rationalise marks. It should be noted that we completed all of this in an hour and fifteen minutes.
First session feedback
During the sessions, the attendees were receptive: there was lots of busy discussion and the vast majority of people were engaged. Tom took a photo as evidence.
Feedback from the session was collected via a “three-minute paper” asking attendees what was good, bad, and missing. (A PDF version of the file can be accessed on the GTA Training Blackboard page.) The first session’s feedback revealed three key issues:
- the slides were too white and uninteresting (a quick win for the second session);
- the attendees wanted more scenarios to discuss;
- and some wanted a round-up or handout.
The second session’s feedback picked up on the scenarios again and on the issue of handouts (though no-one commented on the slides!). Interestingly, whilst the first group mentioned “repetition” and asked for “less discussion, more teacher-focussed” activities, the second group requested “more discussion”. This correlates loosely with more veteran GTAs attending the first session; perhaps they didn’t feel they needed to discuss so much. This was also evident in the number of comments made during their discussions, with the newer GTAs populating each discussion with more topics.
Reflections on the first session
Having run this session for the first time, I would reflect on a couple of key areas. Firstly, more could be made of the “flipped” nature of the session. This was a real strength: it kept the face-to-face session short, because attendees arrived with pre-formed ideas of what the GTA role was and what students expect, and thus allowed deeper conversations about the interlinking of these themes. Alongside the survey, it would be ideal if the students could engage with diversity and equality training, and unconscious bias training. Additionally, it would make sense to include some information about how to make feedback SMART. This could then be used in a workshop activity in which attendees were given scenarios to discuss: small groups could discuss one or two scenarios each and then report back to the whole group. Finally, the first steps of a reflective account could be introduced, where attendees complete a “working with people risk assessment”, picking the key scenarios or issues they think they will need to deal with and collating ideas from the session into ways of dealing with those problems.
In conclusion, as a first pass, this session was successful. I should add that the vast majority of feedback was very positive: many people had nothing to add in the “bad” or “missing” sections, and all mentioned something positive, generally along the lines of “I found the discussion very helpful”.
Regarding the future of these training sessions, I think we can take these lessons and set a framework for developing further sessions. We now have a model for what works (and what doesn’t), and the next logical step will be to build a framework that others can use to help structure their own sessions.
Watch this space for a future post on the “Assessment & Feedback” session run by Tom Rodgers.
by Dr Jenny Slaughter, School of Chemistry