Large Language Models (LLMs) are revolutionizing the field of artificial intelligence, enabling unprecedented advances in natural language understanding, generation, and interaction, and redefining how we process data and communicate with technology. But LLMs come with significant challenges and pitfalls, such as data privacy, model bias, limited knowledge, hallucination, and cost. In this workshop, we will introduce the foundational concepts behind LLMs so that you gain a deeper understanding of their inner workings. You will get hands-on experience building automated pipelines with LangChain to manage complex LLM applications across a variety of problems. We will also cover strategies for validating and testing LLM outputs on real-world datasets, ensuring accuracy and robustness in practical applications.
Keywords: LLM, ChatGPT, Natural Language Processing
Requirements: Please bring a laptop with Python installed. Familiarity with Python programming is highly recommended, as we will be working with code throughout the workshop. Additionally, having an account with OpenAI or Anthropic, along with an API key and sufficient credit, will be helpful for hands-on practice with live models. Prior experience experimenting with prompts on a Large Language Model (LLM) is beneficial but not required, as the workshop will guide you through the basics.
Relevance: This workshop will be most relevant to researchers, students, and bioinformaticians who regularly work with large, complex datasets and need to automate text processing tasks.
Dr Robert Turnbull
Senior Research Data Specialist, Melbourne Data Analytics Platform (MDAP), The University of Melbourne
Robert Turnbull previously worked for Monash Cluster Computing, where he was responsible for developing the geodynamics modelling program Underworld. In 2020, he completed his PhD, in which he used Bayesian phylogenetics to study the transmission history of Arabic manuscript texts from the Middle Ages. He is now a Senior Research Data Specialist at the Melbourne Data Analytics Platform, where he collaborates with researchers across the University of Melbourne on data-intensive research projects. In this role, Robert has developed deep learning models that have won international academic competitions in reading Greek papyri and in interpreting medical imaging.