Gen-AI training

This blog shares key takeaways from training offered by EPAM, which UK Finance colleagues undertook to make better use of LLMs in our work and to better understand the challenges our members face.

The challenge

  • As the financial services sector looks to keep pace with the latest developments in Generative AI, particularly Large Language Models (LLMs), firms need the right skills to maximise the opportunities the technology presents. Different people will need different skills, but a degree of understanding of the strengths, limits and challenges of this technology will be necessary not just for engineers and AI professionals but also for front-line employees using LLM tools, as well as compliance specialists and responsible executives. 

LLM training

  • At UK Finance, we also need to upskill, not only to be able to make good use of LLMs for our work but also to better understand the challenges members face. A group of UK Finance colleagues recently undertook training offered by EPAM. The training covered several topics, including an introduction to LLMs and how they function, prompt engineering, interesting use cases, ethical challenges and techniques for managing possible risks. 

Interesting takeaways

  • How LLMs work – As many people know, LLMs are trained on huge data sets, often scraped from the internet. Likewise, many will know that users ‘prompt’ the LLM with text, for example by asking a question or asking the LLM to produce a paragraph on a certain topic. A key insight is the difference between past AI tools that produced text and LLMs. Past text-producing AI could only consider a couple of preceding words at a time. In contrast, LLMs can ‘consider’ a significant amount of prior text as context when choosing the next word in their output, enabling the production of longer, more coherent responses.  
  • Use cases – Relevant use cases for the financial sector include ‘classic’ tasks such as drafting marketing materials, letters or other documents, summarising reports, and generating code. More specialised use cases, incorporating not only LLMs but also other generative AI, include data analysis and interrogation. Such tools include StatGPT, used by the IMF, which allows users to extract economic data from multiple datasets using natural language, provides answers to questions about the data, and can create and interpret charts.
  • Limitations of LLMs – The limitations of LLMs include ‘hallucinations’ (confident but incorrect assertions) and biased outputs. To mitigate these, developers can take steps like curating more diverse and representative training datasets. Firms deploying LLMs, as well as individual users, can also mitigate risks by taking steps to ensure effective prompting. 
  • Understanding and finetuning ‘prompts’ – Creating effective prompts for LLMs involves crafting clear, specific, and context-rich instructions that guide the model toward producing accurate and relevant responses. There are many techniques that users can employ. Interesting examples include providing examples of good outputs or instructing the LLM to take on a certain role when writing its response, such as acting as a business analyst. Tools are also available to adjust parameters like ‘temperature’ to obtain different levels of creativity and randomness in outputs. More creativity could be ideal for idea generation or brainstorming, while more predictability would often be better for formal letters or other products that need to be anchored in facts. 
  • The ethics of prompt engineering – Cases of AI bias are well known and there are concerns that LLMs can have similar problems, such as reproducing stereotypes. Deployers of LLMs can mitigate risks using techniques like Retrieval Augmented Generation to give the LLM access to the most relevant and representative data, to reduce risk of bias and of hallucination. Firms can also build human-in-the-loop systems where individual experts make the final decisions, using the AI results as a helpful guide rather than having AI make decisions directly. Individual users can avoid biased prompt language, and can provide the LLM with examples of good outputs to model.
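The prompting techniques described above (assigning the LLM a role, supplying examples of good outputs, and giving clear, specific instructions) can be sketched as a simple prompt builder. This is an illustrative sketch only: the function name, prompt wording and example data are assumptions, not any vendor's API. Deployed LLM APIs typically also expose a ‘temperature’ parameter alongside the prompt, where lower values give more predictable outputs and higher values more creative ones.

```python
# Minimal sketch of common prompt-engineering techniques:
# role assignment, few-shot examples, then a clear task instruction.
# All names and wording here are illustrative assumptions.

def build_prompt(role, task, examples):
    """Assemble a context-rich prompt: role first, worked examples, then the task."""
    lines = [f"You are acting as a {role}."]
    for example_input, example_output in examples:
        lines.append(f"Example input: {example_input}")
        lines.append(f"Example output: {example_output}")
    lines.append(f"Task: {task}")
    return "\n".join(lines)

prompt = build_prompt(
    role="business analyst",
    task="Summarise the attached quarterly report in three bullet points.",
    examples=[("Revenue rose 4 per cent...", "- Revenue up 4 per cent year on year")],
)
print(prompt)
```

In practice, the assembled prompt would be sent to an LLM with a low temperature setting for factual tasks like this one, or a higher setting for brainstorming.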
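Retrieval Augmented Generation, mentioned above as a bias and hallucination mitigation, can be sketched in a few lines: retrieve the most relevant passage from a trusted document store and prepend it to the prompt, so the model answers from vetted data rather than from memory alone. This toy sketch uses naive word overlap for retrieval and invented, clearly labelled figures; real systems use vector (embedding) search over curated data.

```python
# Toy sketch of Retrieval Augmented Generation (RAG).
# Retrieval here is naive word overlap; production systems use vector search.
# The document texts and figures are invented, illustrative examples.

def retrieve(query, documents):
    """Return the document sharing the most words with the query."""
    query_words = set(query.lower().split())
    return max(documents, key=lambda d: len(query_words & set(d.lower().split())))

documents = [
    "UK mortgage lending totalled 62bn in Q1 (illustrative figure).",
    "Card fraud losses fell 5 per cent year on year (illustrative figure).",
]
query = "What happened to card fraud losses?"
context = retrieve(query, documents)

# The retrieved passage is injected into the prompt as grounding context.
augmented_prompt = (
    f"Answer using only this context:\n{context}\n\nQuestion: {query}"
)
print(augmented_prompt)
```

The final decision on whether to act on the answer would still sit with a human expert in a human-in-the-loop system, as described above.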

Who might benefit from LLM training? 

  • Anyone involved in using, deploying or overseeing LLM tools will benefit from training that explains how they work, use cases, and possible risks. 
  • Compliance and risk professionals in particular will benefit from a good understanding of risks and mitigation techniques.
  • Knowledge of effective prompt engineering will be particularly beneficial for users of LLM tools, as well as compliance and risk professionals preparing internal controls and policies for safe deployment. 

To find out more about the training EPAM can provide, email:

LearningPractice@epam.com

Or, to learn how EPAM’s DIAL and StatGPT platforms could help you, use the link here:

https://epam-rail.com/ 
