Breakfast News on 28 September 2023 was the third AI event that GTI has run over the last three years – a series that began well before AI was elevated in our collective consciousness by the launch of ChatGPT in November 2022. AI understandably remains a popular topic in our sector, with 150 people registering for the session in London and 800 online.
This is a summary of the event, bringing together the speakers’ content into themes and, in particular, referencing their frameworks to help make sense of the topic. You can access the slides and the speaker and student panel videos from the event below.
AI is nothing new. It refers to ‘automated rules-based decisions’ – like a chatbot or a credit score – which have been around for decades. So, what’s changed? The sophistication of Generative AI: a conversational, chat-based interface where you can ask anything using natural phrases, or upload images or documents. Information, data, video, audio or images that resemble ‘human-created’ content are generated in near real time. This means we no longer need to be ‘good at Googling’ or be a programmer to get sophisticated results.
So Generative AI tools can produce new content tailored to the needs of the user or organisation and do so in an iterative and interactive way. Tools like ChatGPT, Claude, Google Bard and Dall-E are leading the way.
If a software provider’s platform claims to use ‘AI’ (as many now do), it is worth clarifying whether this means automated rules-based decisions (much software can be thought of as using AI in this sense) or Generative AI, which is newer in the way it delivers for users. The sketch below illustrates the difference.
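To make the distinction concrete, here is a minimal Python sketch (our illustration, not from any speaker or vendor): the first function is the kind of fixed, automated rules-based decision that has powered credit scoring for decades, while the second shows the general shape of a Generative AI interaction, with `generate` as a hypothetical stand-in for a real model API.

```python
# Illustrative only: contrasts a rules-based decision (decades old)
# with the shape of a Generative AI interaction.

def credit_decision(income: float, existing_debt: float) -> str:
    """A classic 'automated rules-based decision': fixed thresholds,
    so the same inputs always produce the same output."""
    if existing_debt > income * 0.5:
        return "decline"
    return "approve" if income >= 25_000 else "refer to a human"


def generate(prompt: str) -> str:
    """Stub standing in for a real Generative AI model call; a live
    version would call a provider's SDK here instead."""
    return f"[newly generated answer to: {prompt!r}]"


def ask_generative_model(prompt: str) -> str:
    """The Generative AI shape: free-form natural language in,
    newly generated content out, varying from run to run in practice."""
    return generate(prompt)


print(credit_decision(income=30_000, existing_debt=20_000))  # decline
print(ask_generative_model("Do I have the right skills for this job?"))
```

If a vendor’s answer to ‘what does your AI do?’ looks like the first function, it is the long-established kind; if it looks like the second, it is Generative AI.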
If AI is increasingly automating routine tasks to support skills development and aspects of the hiring process, all of us in the early careers sector need to develop our skills to become ‘better humans’ in areas that aren’t automatable: the design and delivery of skills workshops, careers fairs, insight events, assessment centres, candidate experience (CX) design, candidate care, coaching and role modelling. There will also be a need for expert human oversight of automated decisions.
The student panel at Breakfast News commented on their use of Generative AI in areas like:
Looking at employer usage, Stephen Isherwood, Co-CEO of the Institute of Student Employers (ISE), highlighted that when 92 large UK employers were recently asked whether they currently use AI (meaning automated decisions, not Generative AI), only 9% said they do. Finance and Professional Services firms are the most likely to, followed by Digital and Tech.
This contrasts with the view, held by many students and sometimes in our sector, that most large recruiters make automated decisions. If only around 9% of recruiters make automated decisions at some level, it is likely that an even lower percentage use Generative AI – showing the potential for much more use over time.
ISE has found that the most common uses, in order of usage, are to:
AI and Generative AI applications in use, or likely to soon be used, include:
Thanks to a Venero Capital Advisors 2023 Report for some of these use cases.
The fear that AI will eliminate jobs is real – and nothing new. As technology advances, new career options are created. Students are generally aware of the potential of automation and the possibility of job displacement. They also recognise the emergence of new, potentially improved employment opportunities and the need for AI-related skills development.
A recent JISC report (here) highlights that some students are concerned that theirs is the cohort missing out on developing the AI skills they may need for employment. Students are seeking support from educational institutions to bridge the gap.
Understanding the importance of adaptability, one of the Breakfast News student panellists explained that, in his aerospace engineering degree, the emergence of ChatGPT has encouraged him to choose more technical modules to help future-proof himself. Overall, though, he is excited about AI’s potential.
We’re seeing employers place a higher priority on human skills and on a candidate’s ability to work with the help of Generative AI. GTI and Cappfinity have developed the https://skillsforskills.org/ framework to define these skills and to capture the idea that the future of work will be about always learning and adapting.
When it comes to technical Data Science and AI skills, employers are starting to want higher skill levels, as AI tools (and APIs to them) mean that fewer skills are needed to do the ‘basics’. This has implications for short courses and bootcamps.
GTI is supporting the Department for Science, Innovation and Technology (DSIT) and the Office for AI (OAI), working with the Office for Students (OfS), to help diverse students study AI and Data Science at postgraduate level. This will boost UK innovation and help to reduce the risk of bias in algorithms.
Employers are invited to sponsor one or more students who may otherwise not be able to study at this level. Viscount Camrose, the Parliamentary Under Secretary of State (Minister for AI and Intellectual Property), discussed the scheme (below). To set up a discussion, please leave your details here.
Integrity can be defined as ‘demonstrating a moral compass’. In light of advances in Generative AI, will integrity in exams, online assessments and assessment centres remain part of the cultural norm? In other words, should using tools to pass tests be defined as cheating, or not? The answer may evolve over time.
Cappfinity offers the following framework for thinking about cheating in tests – Deter, Design, Detect. The talk by its Co-CEO, Nicky Garcea, is below.
Deter
Employers often don’t explain to candidates that they shouldn’t cheat, yet it can be done easily in a well-crafted sentence or two. As one of the student panellists said, he avoids cheating because he wouldn’t want to take a job if he wasn’t confident he had the right skills. Another panellist said he would appreciate guidance on whether to use AI and other tools, on an assessment-by-assessment basis.
Educators have offered guidance on plagiarism for some time, so there are lessons there. The majority of students on the panel said they would appreciate more guidance from their university. One student explained that their university has said detailed guidance is a work in progress, so students should avoid using AI tools for now.
A second approach to deterring cheating is offering ways for students to prepare by practising. When we’re confident in ourselves and feel we’re good enough, we're less likely to cheat.
Design
Engaging, relevant tests help deter cheating – for example job simulations and multi-part tests. Universities have found in plagiarism research that people are more likely to cheat when questions are simple. Cognitive tests like simple English language assessments are the most vulnerable for this reason. The research shows that when students feel an emotional connection with relevant tasks, they are less likely to cheat.
One of the student panellists mentioned that law firms are shifting from application forms to video because ChatGPT is being used to support form completion, making answers longer and making it harder to see the ‘true’ student. We are seeing a return to more face-to-face assessment centres for the same reason.
Detect
Risk scores can be calculated for each candidate to indicate the likelihood of that candidate having cheated. The methodology should be tailored to your processes and industry sector. Tristan Mathieson, from GTI’s Data Team, showed Cappfinity’s integrity dashboard in his session; a simple illustration of how such a score might be composed is sketched below. Risk detection needs to be considered alongside what the next steps should be, and legal advice should be sought.
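Cappfinity’s scoring methodology was not shared in detail, so the following is a purely illustrative Python sketch – the signal names and weights are our hypothetical examples, not the dashboard’s logic. The idea is simply that a candidate integrity risk score can combine several weighted signals:

```python
# Illustrative only: Cappfinity's real methodology is not public.
# Signal names and weights here are hypothetical examples.

SIGNAL_WEIGHTS = {
    "unusually_fast_completion": 0.35,   # finished far quicker than the norm
    "answer_style_shift": 0.30,          # writing style changes mid-assessment
    "high_similarity_to_ai_text": 0.25,  # answers closely match model output
    "device_or_ip_anomaly": 0.10,        # session changed device mid-test
}

def integrity_risk_score(signals: dict[str, bool]) -> float:
    """Combine binary signals into a 0-1 risk score.

    A real system would use calibrated, sector-specific weights and
    continuous signal values rather than simple booleans.
    """
    return sum(
        weight for name, weight in SIGNAL_WEIGHTS.items() if signals.get(name)
    )

candidate = {"unusually_fast_completion": True, "answer_style_shift": False}
print(f"Risk score: {integrity_risk_score(candidate):.2f}")  # Risk score: 0.35
```

However it is composed, a score like this should only flag candidates for human review rather than trigger automated rejection – which is why the next steps, and legal advice, matter.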
Clearly Generative AI can be a force for good. We've seen people take up sports like running, reporting back high levels of motivation through the feeling of having someone ‘take an interest in me’ even though it’s an AI bot. This has implications in areas like careers guidance, new starter onboarding and L&D.
But Generative AI can also cause problems: for example, a young person asking ‘do I have the right skills for this job?’ of a badly built Generative AI product, or unwanted bias in a selection algorithm that finds and matches candidates who ‘appear’ to be like current employees rather than looking for a wider, more diverse set of skills and potential.
In October 2022, just before ChatGPT hit the headlines, academics at Cambridge University warned that AI in some recruitment platforms was discriminating against people who wear glasses or sit in front of bare walls, and urged companies to stop relying on ‘pseudoscientific’ software.
Patricia Shaw discussed the ethics and governance of AI below, encouraging us all to deploy AI responsibly using the following framework:
AI ethics, risk management and governance is not a zero-sum game. AI can be used to produce fairer outcomes and respect privacy as part of a flourishing HR ecosystem. It pays to be transparent with your users about your use of AI, to give them the benefit of an explanation when they need it, to ensure that you have the appropriate competencies and accountability structures in place for meaningful human oversight, and to listen to their feedback. Deploying AI responsibly has a positive real-world impact on your stakeholders.
Early careers market update
Experiences & thoughts around AI
AI in action. Practical use cases for employment and graduate outcomes
Deploy responsibly - the opportunities and pitfalls of AI in Recruitment and Retention
The Psychology of candidate integrity
How the UK is harnessing AI and Data Science to build a better future
Not business as usual. AI and the future of jobs.